Dec 13 02:21:49.093715 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:21:49.093750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:49.093767 kernel: BIOS-provided physical RAM map:
Dec 13 02:21:49.093778 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:21:49.093789 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:21:49.093799 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:21:49.093816 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:21:49.093828 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:21:49.093840 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:21:49.093851 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:21:49.093863 kernel: NX (Execute Disable) protection: active
Dec 13 02:21:49.093875 kernel: SMBIOS 2.7 present.
Dec 13 02:21:49.093886 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:21:49.093898 kernel: Hypervisor detected: KVM
Dec 13 02:21:49.093916 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:21:49.093928 kernel: kvm-clock: cpu 0, msr 4419b001, primary cpu clock
Dec 13 02:21:49.093941 kernel: kvm-clock: using sched offset of 7483562253 cycles
Dec 13 02:21:49.093955 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:21:49.093967 kernel: tsc: Detected 2499.994 MHz processor
Dec 13 02:21:49.093980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:21:49.093996 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:21:49.094009 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:21:49.094022 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:21:49.094035 kernel: Using GB pages for direct mapping
Dec 13 02:21:49.094048 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:21:49.094060 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:21:49.094073 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:21:49.094086 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:21:49.094099 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:21:49.094114 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:21:49.094126 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:21:49.094139 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:21:49.094151 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:21:49.094163 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:21:49.094176 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:21:49.094189 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:21:49.094201 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:21:49.094214 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:21:49.094225 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:21:49.094236 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:21:49.094252 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:21:49.094266 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:21:49.094278 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:21:49.094290 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:21:49.094305 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:21:49.094414 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:21:49.094433 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:21:49.094446 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:21:49.094459 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:21:49.094471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:21:49.094483 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:21:49.094496 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:21:49.094531 kernel: Zone ranges:
Dec 13 02:21:49.094544 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:21:49.094558 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:21:49.094571 kernel: Normal empty
Dec 13 02:21:49.094585 kernel: Movable zone start for each node
Dec 13 02:21:49.094597 kernel: Early memory node ranges
Dec 13 02:21:49.094609 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:21:49.094622 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:21:49.094635 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:21:49.094651 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:21:49.094663 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:21:49.094676 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:21:49.094689 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:21:49.094702 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:21:49.094715 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:21:49.094729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:21:49.094742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:21:49.094755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:21:49.094771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:21:49.094784 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:21:49.094797 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:21:49.094810 kernel: TSC deadline timer available
Dec 13 02:21:49.094823 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:21:49.094836 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:21:49.094849 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:21:49.094862 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:21:49.094876 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:21:49.094893 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:21:49.094908 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:21:49.094922 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:21:49.094935 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:21:49.094949 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:21:49.094963 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:21:49.094977 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 02:21:49.094991 kernel: Policy zone: DMA32
Dec 13 02:21:49.095008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:49.095027 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:21:49.095041 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:21:49.095055 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:21:49.095070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:21:49.095085 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:21:49.095099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:21:49.095114 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:21:49.095128 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:21:49.095145 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:21:49.095159 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:21:49.095174 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:21:49.095187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:21:49.095199 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:21:49.095211 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:21:49.095224 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:21:49.095236 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:21:49.095249 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:21:49.095263 kernel: random: crng init done
Dec 13 02:21:49.095275 kernel: Console: colour VGA+ 80x25
Dec 13 02:21:49.095289 kernel: printk: console [ttyS0] enabled
Dec 13 02:21:49.095302 kernel: ACPI: Core revision 20210730
Dec 13 02:21:49.095317 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:21:49.095330 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:21:49.095342 kernel: x2apic enabled
Dec 13 02:21:49.095354 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:21:49.095366 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Dec 13 02:21:49.095381 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Dec 13 02:21:49.095394 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:21:49.095406 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:21:49.095419 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:21:49.095442 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:21:49.095457 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:21:49.095470 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:21:49.095483 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:21:49.095497 kernel: RETBleed: Vulnerable
Dec 13 02:21:49.095540 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:21:49.095553 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:21:49.095567 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:21:49.095580 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:21:49.095593 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:21:49.095607 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:21:49.095622 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:21:49.095636 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:21:49.095649 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:21:49.095663 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:21:49.095679 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:21:49.095693 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:21:49.095707 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:21:49.095721 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:21:49.095734 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:21:49.095805 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:21:49.095820 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 02:21:49.095835 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 02:21:49.095849 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:21:49.095863 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 02:21:49.095877 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:21:49.095891 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:21:49.095908 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:21:49.095922 kernel: LSM: Security Framework initializing
Dec 13 02:21:49.095936 kernel: SELinux: Initializing.
Dec 13 02:21:49.095950 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:21:49.095964 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:21:49.095978 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:21:49.095992 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:21:49.096007 kernel: signal: max sigframe size: 3632
Dec 13 02:21:49.096022 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:21:49.096036 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:21:49.096053 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:21:49.096067 kernel: x86: Booting SMP configuration:
Dec 13 02:21:49.096081 kernel: .... node #0, CPUs: #1
Dec 13 02:21:49.096096 kernel: kvm-clock: cpu 1, msr 4419b041, secondary cpu clock
Dec 13 02:21:49.096110 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:21:49.096126 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:21:49.096141 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:21:49.096155 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:21:49.096169 kernel: smpboot: Max logical packages: 1
Dec 13 02:21:49.096186 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Dec 13 02:21:49.096200 kernel: devtmpfs: initialized
Dec 13 02:21:49.096214 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:21:49.096229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:21:49.096244 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:21:49.096259 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:21:49.096273 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:21:49.096288 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:21:49.096303 kernel: audit: type=2000 audit(1734056508.058:1): state=initialized audit_enabled=0 res=1
Dec 13 02:21:49.096321 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:21:49.096335 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:21:49.096350 kernel: cpuidle: using governor menu
Dec 13 02:21:49.096364 kernel: ACPI: bus type PCI registered
Dec 13 02:21:49.096378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:21:49.096393 kernel: dca service started, version 1.12.1
Dec 13 02:21:49.096407 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:21:49.096421 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:21:49.096436 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:21:49.096453 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:21:49.096467 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:21:49.096482 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:21:49.096496 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:21:49.096528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:21:49.096702 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:21:49.096719 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:21:49.096733 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:21:49.096747 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:21:49.096764 kernel: ACPI: Interpreter enabled
Dec 13 02:21:49.096779 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:21:49.096793 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:21:49.096809 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:21:49.096828 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:21:49.096843 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:21:49.097088 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:21:49.097428 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:21:49.097454 kernel: acpiphp: Slot [3] registered
Dec 13 02:21:49.097469 kernel: acpiphp: Slot [4] registered
Dec 13 02:21:49.097483 kernel: acpiphp: Slot [5] registered
Dec 13 02:21:49.097498 kernel: acpiphp: Slot [6] registered
Dec 13 02:21:49.097543 kernel: acpiphp: Slot [7] registered
Dec 13 02:21:49.097558 kernel: acpiphp: Slot [8] registered
Dec 13 02:21:49.097572 kernel: acpiphp: Slot [9] registered
Dec 13 02:21:49.097586 kernel: acpiphp: Slot [10] registered
Dec 13 02:21:49.097600 kernel: acpiphp: Slot [11] registered
Dec 13 02:21:49.097618 kernel: acpiphp: Slot [12] registered
Dec 13 02:21:49.097632 kernel: acpiphp: Slot [13] registered
Dec 13 02:21:49.097646 kernel: acpiphp: Slot [14] registered
Dec 13 02:21:49.097660 kernel: acpiphp: Slot [15] registered
Dec 13 02:21:49.097674 kernel: acpiphp: Slot [16] registered
Dec 13 02:21:49.097688 kernel: acpiphp: Slot [17] registered
Dec 13 02:21:49.097701 kernel: acpiphp: Slot [18] registered
Dec 13 02:21:49.097715 kernel: acpiphp: Slot [19] registered
Dec 13 02:21:49.097877 kernel: acpiphp: Slot [20] registered
Dec 13 02:21:49.097899 kernel: acpiphp: Slot [21] registered
Dec 13 02:21:49.097913 kernel: acpiphp: Slot [22] registered
Dec 13 02:21:49.097927 kernel: acpiphp: Slot [23] registered
Dec 13 02:21:49.097941 kernel: acpiphp: Slot [24] registered
Dec 13 02:21:49.097956 kernel: acpiphp: Slot [25] registered
Dec 13 02:21:49.097970 kernel: acpiphp: Slot [26] registered
Dec 13 02:21:49.097984 kernel: acpiphp: Slot [27] registered
Dec 13 02:21:49.097998 kernel: acpiphp: Slot [28] registered
Dec 13 02:21:49.098012 kernel: acpiphp: Slot [29] registered
Dec 13 02:21:49.098026 kernel: acpiphp: Slot [30] registered
Dec 13 02:21:49.098042 kernel: acpiphp: Slot [31] registered
Dec 13 02:21:49.098056 kernel: PCI host bridge to bus 0000:00
Dec 13 02:21:49.098294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:21:49.098421 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:21:49.098609 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:21:49.098723 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:21:49.098852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:21:49.098997 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:21:49.099151 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:21:49.099289 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:21:49.099486 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:21:49.099920 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:21:49.100048 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:21:49.100180 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:21:49.100316 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:21:49.100444 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:21:49.100581 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:21:49.100769 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:21:49.100911 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:21:49.101170 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:21:49.101315 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:21:49.101446 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:21:49.101601 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:21:49.101726 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:21:49.101858 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:21:49.101981 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:21:49.102000 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:21:49.102019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:21:49.102034 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:21:49.102049 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:21:49.102064 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:21:49.102079 kernel: iommu: Default domain type: Translated
Dec 13 02:21:49.102094 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:21:49.102214 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:21:49.102335 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:21:49.102692 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:21:49.102719 kernel: vgaarb: loaded
Dec 13 02:21:49.102732 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:21:49.102745 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:21:49.102775 kernel: PTP clock support registered
Dec 13 02:21:49.102882 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:21:49.102894 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:21:49.102907 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:21:49.102920 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:21:49.102935 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:21:49.102948 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:21:49.102960 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:21:49.102972 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:21:49.102984 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:21:49.103041 kernel: pnp: PnP ACPI init
Dec 13 02:21:49.103054 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:21:49.103066 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:21:49.103079 kernel: NET: Registered PF_INET protocol family
Dec 13 02:21:49.103094 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:21:49.103106 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:21:49.103119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:21:49.103131 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:21:49.103143 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:21:49.103156 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:21:49.103168 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:21:49.103181 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:21:49.103193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:21:49.103208 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:21:49.103327 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:21:49.103431 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:21:49.103608 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:21:49.103931 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:21:49.104052 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:21:49.104230 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:21:49.104254 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:21:49.104267 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:21:49.104280 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Dec 13 02:21:49.104293 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:21:49.104305 kernel: Initialise system trusted keyrings
Dec 13 02:21:49.104316 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:21:49.104329 kernel: Key type asymmetric registered
Dec 13 02:21:49.104341 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:21:49.104353 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:21:49.104368 kernel: io scheduler mq-deadline registered
Dec 13 02:21:49.104381 kernel: io scheduler kyber registered
Dec 13 02:21:49.104393 kernel: io scheduler bfq registered
Dec 13 02:21:49.104405 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:21:49.104417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:21:49.104428 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:21:49.104440 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:21:49.104453 kernel: i8042: Warning: Keylock active
Dec 13 02:21:49.104465 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:21:49.104479 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:21:49.104677 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:21:49.104937 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:21:49.105104 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:21:48 UTC (1734056508)
Dec 13 02:21:49.105211 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:21:49.105226 kernel: intel_pstate: CPU model not supported
Dec 13 02:21:49.105239 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:21:49.105252 kernel: Segment Routing with IPv6
Dec 13 02:21:49.105268 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:21:49.105281 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:21:49.105294 kernel: Key type dns_resolver registered
Dec 13 02:21:49.105306 kernel: IPI shorthand broadcast: enabled
Dec 13 02:21:49.105318 kernel: sched_clock: Marking stable (372849440, 244664857)->(731647207, -114132910)
Dec 13 02:21:49.105330 kernel: registered taskstats version 1
Dec 13 02:21:49.105342 kernel: Loading compiled-in X.509 certificates
Dec 13 02:21:49.105354 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:21:49.105366 kernel: Key type .fscrypt registered
Dec 13 02:21:49.105381 kernel: Key type fscrypt-provisioning registered
Dec 13 02:21:49.105394 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:21:49.105406 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:21:49.105418 kernel: ima: No architecture policies found
Dec 13 02:21:49.105430 kernel: clk: Disabling unused clocks
Dec 13 02:21:49.105442 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:21:49.105455 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:21:49.105466 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:21:49.105479 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:21:49.105577 kernel: Run /init as init process
Dec 13 02:21:49.105592 kernel: with arguments:
Dec 13 02:21:49.105604 kernel: /init
Dec 13 02:21:49.105615 kernel: with environment:
Dec 13 02:21:49.105627 kernel: HOME=/
Dec 13 02:21:49.105640 kernel: TERM=linux
Dec 13 02:21:49.105652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:21:49.105668 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:21:49.105687 systemd[1]: Detected virtualization amazon.
Dec 13 02:21:49.105700 systemd[1]: Detected architecture x86-64.
Dec 13 02:21:49.105712 systemd[1]: Running in initrd.
Dec 13 02:21:49.105726 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:21:49.105753 systemd[1]: Hostname set to <localhost>.
Dec 13 02:21:49.105772 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:21:49.105786 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:21:49.105799 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:21:49.105813 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:21:49.105826 systemd[1]: Reached target paths.target.
Dec 13 02:21:49.105839 systemd[1]: Reached target slices.target.
Dec 13 02:21:49.105852 systemd[1]: Reached target swap.target.
Dec 13 02:21:49.105866 systemd[1]: Reached target timers.target.
Dec 13 02:21:49.105884 systemd[1]: Listening on iscsid.socket.
Dec 13 02:21:49.105897 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:21:49.105910 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:21:49.105924 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:21:49.105938 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:21:49.105951 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:21:49.105968 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:21:49.105982 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:21:49.105995 systemd[1]: Reached target sockets.target.
Dec 13 02:21:49.106011 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:21:49.106025 systemd[1]: Finished network-cleanup.service.
Dec 13 02:21:49.106039 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:21:49.106052 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:21:49.106065 systemd[1]: Starting systemd-journald.service...
Dec 13 02:21:49.106078 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:21:49.106092 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:21:49.106106 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:21:49.106120 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:21:49.106136 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:21:49.106149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:21:49.106163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:21:49.106183 systemd-journald[185]: Journal started
Dec 13 02:21:49.106251 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2fe16a023dbb17103fa6677f15ac9c) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:21:49.099999 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 02:21:49.282689 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:21:49.282735 kernel: Bridge firewalling registered
Dec 13 02:21:49.282755 kernel: SCSI subsystem initialized
Dec 13 02:21:49.282770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:21:49.282791 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:21:49.282809 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:21:49.282824 systemd[1]: Started systemd-journald.service.
Dec 13 02:21:49.282844 kernel: audit: type=1130 audit(1734056509.276:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.100012 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:21:49.100066 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:21:49.111453 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:21:49.116329 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 02:21:49.175579 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:21:49.306102 kernel: audit: type=1130 audit(1734056509.292:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.306129 kernel: audit: type=1130 audit(1734056509.292:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.242187 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 02:21:49.317487 kernel: audit: type=1130 audit(1734056509.305:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.317546 kernel: audit: type=1130 audit(1734056509.311:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.292930 systemd[1]: Started systemd-resolved.service.
Dec 13 02:21:49.300298 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:21:49.306271 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:21:49.312055 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:21:49.318896 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:21:49.320457 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:21:49.332449 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:21:49.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.338583 kernel: audit: type=1130 audit(1734056509.333:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.349296 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:21:49.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.352533 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:21:49.360729 kernel: audit: type=1130 audit(1734056509.349:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.369116 dracut-cmdline[208]: dracut-dracut-053
Dec 13 02:21:49.372235 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:49.454542 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:21:49.498576 kernel: iscsi: registered transport (tcp)
Dec 13 02:21:49.530531 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:21:49.530606 kernel: QLogic iSCSI HBA Driver
Dec 13 02:21:49.567383 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:21:49.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.570576 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:21:49.578586 kernel: audit: type=1130 audit(1734056509.568:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:49.629574 kernel: raid6: avx512x4 gen() 11603 MB/s
Dec 13 02:21:49.646554 kernel: raid6: avx512x4 xor() 6919 MB/s
Dec 13 02:21:49.664584 kernel: raid6: avx512x2 gen() 16385 MB/s
Dec 13 02:21:49.681563 kernel: raid6: avx512x2 xor() 21821 MB/s
Dec 13 02:21:49.698562 kernel: raid6: avx512x1 gen() 15779 MB/s
Dec 13 02:21:49.715592 kernel: raid6: avx512x1 xor() 18344 MB/s
Dec 13 02:21:49.732561 kernel: raid6: avx2x4 gen() 15760 MB/s
Dec 13 02:21:49.750561 kernel: raid6: avx2x4 xor() 4742 MB/s
Dec 13 02:21:49.769568 kernel: raid6: avx2x2 gen() 6641 MB/s
Dec 13 02:21:49.787579 kernel: raid6: avx2x2 xor() 8003 MB/s
Dec 13 02:21:49.810277 kernel: raid6: avx2x1 gen() 5437 MB/s
Dec 13 02:21:49.830562 kernel: raid6: avx2x1 xor() 4798 MB/s
Dec 13 02:21:49.847747 kernel: raid6: sse2x4 gen() 5206 MB/s
Dec 13 02:21:49.864592 kernel: raid6: sse2x4 xor() 2645 MB/s
Dec 13 02:21:49.882561 kernel: raid6: sse2x2 gen() 3156 MB/s
Dec 13 02:21:49.900557 kernel: raid6: sse2x2 xor() 3374 MB/s
Dec 13 02:21:49.917565 kernel: raid6: sse2x1 gen() 4062 MB/s
Dec 13 02:21:49.935230 kernel: raid6: sse2x1 xor() 3206 MB/s
Dec 13 02:21:49.935315 kernel: raid6: using algorithm avx512x2 gen() 16385 MB/s
Dec 13 02:21:49.935335 kernel: raid6: .... xor() 21821 MB/s, rmw enabled
Dec 13 02:21:49.936465 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 02:21:49.954539 kernel: xor: automatically using best checksumming function avx
Dec 13 02:21:50.078545 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:21:50.091324 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:21:50.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:50.093688 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:21:50.099654 kernel: audit: type=1130 audit(1734056510.090:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:50.093000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:21:50.093000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:21:50.134763 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Dec 13 02:21:50.140991 systemd[1]: Started systemd-udevd.service.
Dec 13 02:21:50.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:50.143315 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:21:50.191958 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Dec 13 02:21:50.241891 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:21:50.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:50.245409 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:21:50.307263 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:21:50.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:50.393932 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:21:50.402696 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 02:21:50.433901 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 02:21:50.434073 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 02:21:50.434279 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:54:55:74:3c:85
Dec 13 02:21:50.436606 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:21:50.436652 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:21:50.438015 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 02:21:50.438569 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:21:50.444161 (udev-worker)[433]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:50.452191 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 02:21:50.456877 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:21:50.456944 kernel: GPT:9289727 != 16777215
Dec 13 02:21:50.456962 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:21:50.458610 kernel: GPT:9289727 != 16777215
Dec 13 02:21:50.458664 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:21:50.460530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:50.565552 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (437)
Dec 13 02:21:50.675859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:21:50.687672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:21:50.690480 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:21:50.697663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:21:50.704939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:21:50.707013 systemd[1]: Starting disk-uuid.service...
Dec 13 02:21:50.715654 disk-uuid[593]: Primary Header is updated.
Dec 13 02:21:50.715654 disk-uuid[593]: Secondary Entries is updated.
Dec 13 02:21:50.715654 disk-uuid[593]: Secondary Header is updated.
Dec 13 02:21:50.723535 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:50.730968 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:51.745601 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:51.746040 disk-uuid[594]: The operation has completed successfully.
Dec 13 02:21:51.901374 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:21:51.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:51.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:51.901486 systemd[1]: Finished disk-uuid.service.
Dec 13 02:21:51.909913 systemd[1]: Starting verity-setup.service...
Dec 13 02:21:51.944729 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:21:52.111706 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:21:52.122949 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:21:52.126673 systemd[1]: Finished verity-setup.service.
Dec 13 02:21:52.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.320806 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:21:52.321438 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:21:52.323184 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:21:52.325546 systemd[1]: Starting ignition-setup.service...
Dec 13 02:21:52.328316 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:21:52.356254 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:21:52.356320 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:21:52.356339 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:21:52.369534 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:21:52.385402 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:21:52.396702 systemd[1]: Finished ignition-setup.service.
Dec 13 02:21:52.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.397895 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:21:52.431146 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:21:52.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.432000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:21:52.434027 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:21:52.477777 systemd-networkd[1107]: lo: Link UP
Dec 13 02:21:52.478328 systemd-networkd[1107]: lo: Gained carrier
Dec 13 02:21:52.480256 systemd-networkd[1107]: Enumeration completed
Dec 13 02:21:52.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.480390 systemd[1]: Started systemd-networkd.service.
Dec 13 02:21:52.481863 systemd[1]: Reached target network.target.
Dec 13 02:21:52.486958 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:21:52.492792 systemd[1]: Starting iscsiuio.service...
Dec 13 02:21:52.506623 systemd-networkd[1107]: eth0: Link UP
Dec 13 02:21:52.506793 systemd-networkd[1107]: eth0: Gained carrier
Dec 13 02:21:52.509715 systemd[1]: Started iscsiuio.service.
Dec 13 02:21:52.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.511876 systemd[1]: Starting iscsid.service...
Dec 13 02:21:52.516722 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:21:52.516722 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:21:52.516722 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:21:52.516722 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:21:52.516722 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:21:52.516722 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:21:52.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.518659 systemd[1]: Started iscsid.service.
Dec 13 02:21:52.528496 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:21:52.536636 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.16.161/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 02:21:52.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.547612 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:21:52.549272 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:21:52.551650 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:21:52.552648 systemd[1]: Reached target remote-fs.target.
Dec 13 02:21:52.554620 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:21:52.566904 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:21:52.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.872078 ignition[1074]: Ignition 2.14.0
Dec 13 02:21:52.872093 ignition[1074]: Stage: fetch-offline
Dec 13 02:21:52.872235 ignition[1074]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:52.872276 ignition[1074]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:52.895694 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:52.897361 ignition[1074]: Ignition finished successfully
Dec 13 02:21:52.898686 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:21:52.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.900930 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:21:52.927843 ignition[1131]: Ignition 2.14.0
Dec 13 02:21:52.927858 ignition[1131]: Stage: fetch
Dec 13 02:21:52.928066 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:52.928108 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:52.941141 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:52.942482 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:52.951268 ignition[1131]: INFO : PUT result: OK
Dec 13 02:21:52.953951 ignition[1131]: DEBUG : parsed url from cmdline: ""
Dec 13 02:21:52.953951 ignition[1131]: INFO : no config URL provided
Dec 13 02:21:52.953951 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:21:52.957523 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 02:21:52.957523 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:52.959879 ignition[1131]: INFO : PUT result: OK
Dec 13 02:21:52.959879 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 02:21:52.961842 ignition[1131]: INFO : GET result: OK
Dec 13 02:21:52.961842 ignition[1131]: DEBUG : parsing config with SHA512: 471317ab238b0d9e76b4f0cd6300f860814d82fa18e3f2635ea5814dccaf54b0e131198327727897133867ffd4ef5c837017615b9aaef50e59fc19544e314530
Dec 13 02:21:52.971777 unknown[1131]: fetched base config from "system"
Dec 13 02:21:52.971793 unknown[1131]: fetched base config from "system"
Dec 13 02:21:52.971802 unknown[1131]: fetched user config from "aws"
Dec 13 02:21:52.978391 ignition[1131]: fetch: fetch complete
Dec 13 02:21:52.978403 ignition[1131]: fetch: fetch passed
Dec 13 02:21:52.978472 ignition[1131]: Ignition finished successfully
Dec 13 02:21:52.981968 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:21:52.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:52.983954 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:21:52.997041 ignition[1137]: Ignition 2.14.0
Dec 13 02:21:52.997051 ignition[1137]: Stage: kargs
Dec 13 02:21:52.997287 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:52.997314 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:53.005832 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:53.007241 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:53.009694 ignition[1137]: INFO : PUT result: OK
Dec 13 02:21:53.014183 ignition[1137]: kargs: kargs passed
Dec 13 02:21:53.014237 ignition[1137]: Ignition finished successfully
Dec 13 02:21:53.016473 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:21:53.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.018546 systemd[1]: Starting ignition-disks.service...
Dec 13 02:21:53.030316 ignition[1143]: Ignition 2.14.0
Dec 13 02:21:53.030330 ignition[1143]: Stage: disks
Dec 13 02:21:53.030553 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:53.030588 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:53.040188 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:53.041523 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:53.044377 ignition[1143]: INFO : PUT result: OK
Dec 13 02:21:53.047272 ignition[1143]: disks: disks passed
Dec 13 02:21:53.047336 ignition[1143]: Ignition finished successfully
Dec 13 02:21:53.050051 systemd[1]: Finished ignition-disks.service.
Dec 13 02:21:53.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.050287 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:21:53.052561 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:21:53.054421 systemd[1]: Reached target local-fs.target.
Dec 13 02:21:53.056139 systemd[1]: Reached target sysinit.target.
Dec 13 02:21:53.056194 systemd[1]: Reached target basic.target.
Dec 13 02:21:53.062207 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:21:53.112234 systemd-fsck[1151]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 02:21:53.115450 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:21:53.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.118156 systemd[1]: Mounting sysroot.mount...
Dec 13 02:21:53.143972 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:21:53.145389 systemd[1]: Mounted sysroot.mount.
Dec 13 02:21:53.150969 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:21:53.155863 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:21:53.158110 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:21:53.158182 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:21:53.159836 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:21:53.165911 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:21:53.172844 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:21:53.182139 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:21:53.193735 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:21:53.197289 initrd-setup-root[1181]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:21:53.205959 initrd-setup-root[1189]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:21:53.217162 initrd-setup-root[1197]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:21:53.225544 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1179)
Dec 13 02:21:53.230578 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:21:53.230637 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:21:53.230656 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:21:53.240538 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:21:53.250815 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:21:53.371070 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:21:53.379542 kernel: kauditd_printk_skb: 21 callbacks suppressed
Dec 13 02:21:53.379597 kernel: audit: type=1130 audit(1734056513.373:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.374527 systemd[1]: Starting ignition-mount.service...
Dec 13 02:21:53.384732 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:21:53.399225 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:21:53.399358 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:21:53.419844 ignition[1233]: INFO : Ignition 2.14.0
Dec 13 02:21:53.419844 ignition[1233]: INFO : Stage: mount
Dec 13 02:21:53.422139 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:53.422139 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:53.456293 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:53.458027 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:53.460853 ignition[1233]: INFO : PUT result: OK
Dec 13 02:21:53.465033 ignition[1233]: INFO : mount: mount passed
Dec 13 02:21:53.465033 ignition[1233]: INFO : Ignition finished successfully
Dec 13 02:21:53.467369 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:21:53.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.469155 systemd[1]: Finished ignition-mount.service.
Dec 13 02:21:53.476926 kernel: audit: type=1130 audit(1734056513.468:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.481799 kernel: audit: type=1130 audit(1734056513.476:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:53.480830 systemd[1]: Starting ignition-files.service...
Dec 13 02:21:53.495980 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:21:53.517532 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243)
Dec 13 02:21:53.517602 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:21:53.520071 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:21:53.520109 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:21:53.528529 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:21:53.534622 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:21:53.547062 ignition[1262]: INFO : Ignition 2.14.0
Dec 13 02:21:53.547062 ignition[1262]: INFO : Stage: files
Dec 13 02:21:53.549186 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:53.549186 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:53.556971 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:53.561234 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:53.563215 ignition[1262]: INFO : PUT result: OK
Dec 13 02:21:53.568850 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:21:53.580429 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:21:53.580429 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:21:53.594762 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:21:53.597186 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:21:53.599799 unknown[1262]: wrote ssh authorized keys file for user: core
Dec 13 02:21:53.601155 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:21:53.604050 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:21:53.609558 ignition[1262]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:21:53.710673 ignition[1262]: INFO : GET result: OK
Dec 13 02:21:53.881778 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:21:53.884219 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:21:53.886172 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:21:53.886172 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 02:21:53.890423 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:21:53.897759 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3282720753"
Dec 13 02:21:53.904162 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1267)
Dec 13 02:21:53.904190 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3282720753": device or resource busy
Dec 13 02:21:53.904190 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3282720753", trying btrfs: device or resource busy
Dec 13 02:21:53.904190 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3282720753"
Dec 13 02:21:53.904190 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3282720753"
Dec 13 02:21:53.911587 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem3282720753"
Dec 13 02:21:53.911587 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem3282720753"
Dec 13 02:21:53.911587 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 02:21:53.911587 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:21:53.911587 ignition[1262]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 02:21:54.419715 systemd-networkd[1107]: eth0: Gained IPv6LL
Dec 13 02:21:54.448283 ignition[1262]: INFO : GET result: OK
Dec 13 02:21:54.584866 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:21:54.587460 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:21:54.587460 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:21:54.628884 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1960424793"
Dec 13 02:21:54.634769 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1960424793": device or resource busy
Dec 13 02:21:54.634769 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1960424793", trying btrfs: device or resource busy
Dec 13 02:21:54.634769 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1960424793"
Dec 13 02:21:54.650194 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1960424793"
Dec 13 02:21:54.650194 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem1960424793"
Dec 13 02:21:54.663960 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem1960424793"
Dec 13 02:21:54.663960 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:21:54.663960 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 02:21:54.663960 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:21:54.651585 systemd[1]: mnt-oem1960424793.mount: Deactivated successfully.
Dec 13 02:21:54.683242 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3023678712"
Dec 13 02:21:54.684950 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3023678712": device or resource busy
Dec 13 02:21:54.684950 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3023678712", trying btrfs: device or resource busy
Dec 13 02:21:54.684950 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3023678712"
Dec 13 02:21:54.684950 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3023678712"
Dec 13 02:21:54.692142 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem3023678712"
Dec 13 02:21:54.692142 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem3023678712"
Dec 13 02:21:54.692142 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 02:21:54.692142 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:21:54.701332 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 02:21:54.695067 systemd[1]: mnt-oem3023678712.mount: Deactivated successfully.
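Each "/mnt/oemNNNNNNNNNN" sequence above is the same fallback dance: Ignition first tries the OEM partition as ext4, gets "device or resource busy" (the CRITICAL/ERROR pair), then succeeds with btrfs and unmounts again once the file is written. A sketch of that retry logic (purely illustrative; the real implementation is Go inside Ignition, and shelling out to mount(8) requires root):

    import subprocess
    import tempfile

    def mount_oem(device="/dev/disk/by-label/OEM"):
        # Try filesystem types in the order the log shows: ext4 first, then btrfs.
        target = tempfile.mkdtemp(prefix="oem")
        for fstype in ("ext4", "btrfs"):
            result = subprocess.run(["mount", "-t", fstype, device, target],
                                    capture_output=True)
            if result.returncode == 0:
                return target  # caller unmounts when done, as in op(3)/op(6)/op(9)
        raise RuntimeError(f"unable to mount {device} as ext4 or btrfs")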
Dec 13 02:21:55.112887 ignition[1262]: INFO : GET result: OK
Dec 13 02:21:55.411696 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:21:55.411696 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 02:21:55.416826 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:21:55.421670 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3675759466"
Dec 13 02:21:55.423225 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3675759466": device or resource busy
Dec 13 02:21:55.423225 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3675759466", trying btrfs: device or resource busy
Dec 13 02:21:55.423225 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3675759466"
Dec 13 02:21:55.428764 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3675759466"
Dec 13 02:21:55.428764 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem3675759466"
Dec 13 02:21:55.428764 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem3675759466"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(13): [started] processing unit "nvidia.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:21:55.433515 ignition[1262]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 02:21:55.430934 systemd[1]: mnt-oem3675759466.mount: Deactivated successfully.
Dec 13 02:21:55.471262 ignition[1262]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 02:21:55.474599 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:21:55.476390 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:21:55.476390 ignition[1262]: INFO : files: files passed
Dec 13 02:21:55.476390 ignition[1262]: INFO : Ignition finished successfully
Dec 13 02:21:55.481147 systemd[1]: Finished ignition-files.service.
Dec 13 02:21:55.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.487548 kernel: audit: type=1130 audit(1734056515.482:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.490795 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:21:55.496648 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:21:55.498224 systemd[1]: Starting ignition-quench.service...
Dec 13 02:21:55.506099 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:21:55.506284 systemd[1]: Finished ignition-quench.service.
Dec 13 02:21:55.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.516597 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:21:55.523167 kernel: audit: type=1130 audit(1734056515.511:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.523197 kernel: audit: type=1131 audit(1734056515.511:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.521252 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:21:55.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.527572 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:21:55.534276 kernel: audit: type=1130 audit(1734056515.527:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.535029 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:21:55.557245 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:21:55.558660 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:21:55.560995 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:21:55.562836 systemd[1]: Reached target initrd.target.
Dec 13 02:21:55.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.564534 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:21:55.571037 kernel: audit: type=1130 audit(1734056515.560:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.570905 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:21:55.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.577542 kernel: audit: type=1131 audit(1734056515.560:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.584844 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:21:55.594668 kernel: audit: type=1130 audit(1734056515.584:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.590647 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:21:55.604269 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:21:55.604477 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:21:55.608496 systemd[1]: Stopped target timers.target.
Dec 13 02:21:55.610433 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:21:55.612021 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:21:55.613876 systemd[1]: Stopped target initrd.target.
Dec 13 02:21:55.615482 systemd[1]: Stopped target basic.target.
Dec 13 02:21:55.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.617367 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:21:55.620027 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:21:55.625082 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:21:55.625215 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:21:55.633060 systemd[1]: Stopped target remote-fs-pre.target.
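Ops (16)-(19) in the files stage above enable four units by preset rather than by symlinking them directly. The result is an ordinary systemd preset file; assuming the conventional Ignition location /etc/systemd/system-preset/20-ignition.preset (the path is an assumption, the log only shows the preset operations), it would read roughly:

    # /etc/systemd/system-preset/20-ignition.preset (illustrative path)
    enable nvidia.service
    enable prepare-helm.service
    enable coreos-metadata-sshkeys@.service
    enable amazon-ssm-agent.service

The "Populated /etc with preset unit settings" line later in the log is systemd acting on presets like these during first boot.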
Dec 13 02:21:55.635420 systemd[1]: Stopped target sysinit.target.
Dec 13 02:21:55.637470 systemd[1]: Stopped target local-fs.target.
Dec 13 02:21:55.641343 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:21:55.643673 systemd[1]: Stopped target swap.target.
Dec 13 02:21:55.645192 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:21:55.646239 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:21:55.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.648684 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:21:55.650473 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:21:55.650612 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:21:55.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.653408 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:21:55.653523 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:21:55.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.657860 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:21:55.658922 systemd[1]: Stopped ignition-files.service.
Dec 13 02:21:55.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.661492 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:21:55.662981 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:21:55.663881 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:21:55.664012 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:21:55.676618 ignition[1301]: INFO : Ignition 2.14.0
Dec 13 02:21:55.676618 ignition[1301]: INFO : Stage: umount
Dec 13 02:21:55.676618 ignition[1301]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:55.676618 ignition[1301]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:55.665109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:21:55.665195 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:21:55.673162 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:21:55.685195 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:21:55.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.689487 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:55.691082 ignition[1301]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:55.692327 ignition[1301]: INFO : PUT result: OK
Dec 13 02:21:55.695592 ignition[1301]: INFO : umount: umount passed
Dec 13 02:21:55.696588 ignition[1301]: INFO : Ignition finished successfully
Dec 13 02:21:55.697059 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:21:55.697172 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:21:55.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.700141 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:21:55.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.700302 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:21:55.701248 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:21:55.701301 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:21:55.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.707012 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:21:55.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.707074 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:21:55.707971 systemd[1]: Stopped target network.target.
Dec 13 02:21:55.717690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:21:55.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.717787 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:21:55.719979 systemd[1]: Stopped target paths.target.
Dec 13 02:21:55.725159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:21:55.725406 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:21:55.731467 systemd[1]: Stopped target slices.target.
Dec 13 02:21:55.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.732634 systemd[1]: Stopped target sockets.target.
Dec 13 02:21:55.733692 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:21:55.733736 systemd[1]: Closed iscsid.socket.
Dec 13 02:21:55.734498 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:21:55.734545 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:21:55.735276 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:21:55.735335 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:21:55.736984 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:21:55.746944 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:21:55.748967 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:21:55.750558 systemd-networkd[1107]: eth0: DHCPv6 lease lost
Dec 13 02:21:55.752864 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:21:55.754233 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:21:55.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.756847 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:21:55.756955 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:21:55.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.759000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:21:55.759788 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:21:55.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.759890 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:21:55.760731 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:21:55.760773 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:21:55.762711 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:21:55.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.765950 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:21:55.766014 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:21:55.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.768801 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:21:55.768850 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:21:55.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.771941 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:21:55.771985 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:21:55.778574 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:21:55.781820 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:21:55.782435 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:21:55.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.786000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:21:55.782545 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:21:55.789092 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:21:55.789274 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:21:55.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.792797 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:21:55.792937 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:21:55.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.797081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:21:55.797132 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:21:55.798116 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:21:55.799252 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:21:55.803386 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:21:55.803454 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:21:55.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.805461 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:21:55.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.805651 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:21:55.807569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:21:55.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.807616 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:21:55.811619 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:21:55.813587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:21:55.813646 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 02:21:55.816959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:21:55.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.817004 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:21:55.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.821616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:21:55.821664 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:21:55.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:55.825001 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 02:21:55.825925 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:21:55.826010 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:21:55.827266 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:21:55.837825 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:21:55.854125 systemd[1]: Switching root.
Dec 13 02:21:55.888429 iscsid[1112]: iscsid shutting down.
Dec 13 02:21:55.889675 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Dec 13 02:21:55.889838 systemd-journald[185]: Journal stopped
Dec 13 02:22:00.551603 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:22:00.551684 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:22:00.551704 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:22:00.551722 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:22:00.551744 kernel: SELinux: policy capability open_perms=1
Dec 13 02:22:00.551764 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:22:00.551782 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:22:00.551804 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:22:00.551821 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:22:00.551843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:22:00.551859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:22:00.551883 systemd[1]: Successfully loaded SELinux policy in 91.541ms.
Dec 13 02:22:00.551918 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.281ms.
Dec 13 02:22:00.551940 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:22:00.551960 systemd[1]: Detected virtualization amazon.
Dec 13 02:22:00.551979 systemd[1]: Detected architecture x86-64.
Dec 13 02:22:00.551997 systemd[1]: Detected first boot.
Dec 13 02:22:00.552015 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:22:00.552033 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:22:00.552051 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:22:00.552070 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:22:00.552094 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:22:00.552115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:22:00.552135 kernel: kauditd_printk_skb: 50 callbacks suppressed
Dec 13 02:22:00.552156 kernel: audit: type=1334 audit(1734056520.263:85): prog-id=12 op=LOAD
Dec 13 02:22:00.552174 kernel: audit: type=1334 audit(1734056520.263:86): prog-id=3 op=UNLOAD
Dec 13 02:22:00.552191 kernel: audit: type=1334 audit(1734056520.266:87): prog-id=13 op=LOAD
Dec 13 02:22:00.552212 kernel: audit: type=1334 audit(1734056520.267:88): prog-id=14 op=LOAD
Dec 13 02:22:00.552231 kernel: audit: type=1334 audit(1734056520.267:89): prog-id=4 op=UNLOAD
Dec 13 02:22:00.552427 kernel: audit: type=1334 audit(1734056520.267:90): prog-id=5 op=UNLOAD
Dec 13 02:22:00.552458 kernel: audit: type=1131 audit(1734056520.270:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.552481 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:22:00.552502 kernel: audit: type=1334 audit(1734056520.276:92): prog-id=12 op=UNLOAD
Dec 13 02:22:00.552546 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:22:00.552578 kernel: audit: type=1131 audit(1734056520.278:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.552600 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:22:00.552619 systemd[1]: Stopped iscsid.service.
Dec 13 02:22:00.552638 kernel: audit: type=1131 audit(1734056520.287:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.552655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:22:00.552674 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:22:00.552693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:22:00.552724 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:22:00.552747 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:22:00.552766 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:22:00.552784 systemd[1]: Created slice system-getty.slice.
Dec 13 02:22:00.552800 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:22:00.552817 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:22:00.552836 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:22:00.552854 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:22:00.552894 systemd[1]: Created slice user.slice.
Dec 13 02:22:00.552913 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:22:00.552934 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:22:00.552953 systemd[1]: Set up automount boot.automount.
Dec 13 02:22:00.552970 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:22:00.553055 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:22:00.553075 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:22:00.553093 systemd[1]: Stopped target initrd-root-fs.target.
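The deprecation warnings a few lines up are actionable: locksmithd.service uses cgroup-v1 resource directives, and docker.socket points at the legacy /var/run path. The fixes are the unit-file edits systemd itself suggests (the values below are illustrative; the log does not show locksmithd's actual numbers, and CPUShares=1024 / CPUWeight=100 are simply the matching defaults of the two schemes):

    # locksmithd.service, [Service] section -- before:
    CPUShares=1024
    MemoryLimit=128M
    # after:
    CPUWeight=100
    MemoryMax=128M

    # docker.socket, [Socket] section:
    ListenStream=/run/docker.sock    # was /var/run/docker.sock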
Dec 13 02:22:00.553111 systemd[1]: Reached target integritysetup.target.
Dec 13 02:22:00.553129 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:22:00.553147 systemd[1]: Reached target remote-fs.target.
Dec 13 02:22:00.553217 systemd[1]: Reached target slices.target.
Dec 13 02:22:00.553241 systemd[1]: Reached target swap.target.
Dec 13 02:22:00.553259 systemd[1]: Reached target torcx.target.
Dec 13 02:22:00.553281 systemd[1]: Reached target veritysetup.target.
Dec 13 02:22:00.553298 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:22:00.553316 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:22:00.553334 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:22:00.553351 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:22:00.553369 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:22:00.553387 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:22:00.553409 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:22:00.553428 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:22:00.553446 systemd[1]: Mounting media.mount...
Dec 13 02:22:00.553465 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:22:00.553483 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:22:00.553501 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:22:00.553546 systemd[1]: Mounting tmp.mount...
Dec 13 02:22:00.553564 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:22:00.553581 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:22:00.553602 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:22:00.553619 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:22:00.553637 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:22:00.553655 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:22:00.553671 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:22:00.553689 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:22:00.553707 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:22:00.553725 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:22:00.553755 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:22:00.553775 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:22:00.553793 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:22:00.553811 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:22:00.553830 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:22:00.553849 systemd[1]: Starting systemd-journald.service...
Dec 13 02:22:00.553868 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:22:00.553885 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:22:00.553904 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:22:00.554054 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:22:00.554081 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:22:00.554100 systemd[1]: Stopped verity-setup.service.
Dec 13 02:22:00.554119 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:22:00.554136 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:22:00.554154 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:22:00.554176 systemd[1]: Mounted media.mount.
Dec 13 02:22:00.554194 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:22:00.554213 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:22:00.554230 systemd[1]: Mounted tmp.mount.
Dec 13 02:22:00.554248 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:22:00.554268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:22:00.554286 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:22:00.554304 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:22:00.554321 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:22:00.554339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:22:00.554357 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:22:00.554375 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:22:00.554394 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:22:00.554411 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:22:00.554480 kernel: loop: module loaded
Dec 13 02:22:00.554501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:22:00.554643 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:22:00.554664 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:22:00.554687 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:22:00.554719 systemd-journald[1410]: Journal started
Dec 13 02:22:00.554793 systemd-journald[1410]: Runtime Journal (/run/log/journal/ec2fe16a023dbb17103fa6677f15ac9c) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:21:56.314000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:21:56.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:21:56.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:21:56.447000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:21:56.447000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:21:56.448000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:21:56.448000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:21:56.673000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:21:56.673000 audit[1334]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:21:56.673000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:21:56.676000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:21:56.676000 audit[1334]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:21:56.676000 audit: CWD cwd="/"
Dec 13 02:21:56.676000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:21:56.676000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:21:56.676000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:22:00.263000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:22:00.557557 systemd[1]: Started systemd-journald.service.
Dec 13 02:22:00.263000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:22:00.266000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:22:00.267000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:22:00.267000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:22:00.267000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:22:00.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.276000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:22:00.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.474000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:22:00.474000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:22:00.474000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:22:00.474000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:22:00.474000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:22:00.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.549000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:22:00.549000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffff6e12290 a2=4000 a3=7ffff6e1232c items=0 ppid=1 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:22:00.549000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:22:00.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:00.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:56.659377 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:22:00.261005 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:21:56.660958 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:22:00.270729 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:21:56.661046 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:22:00.559930 systemd[1]: Reached target network-pre.target.
Dec 13 02:21:56.661132 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:22:00.563000 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:21:56.661150 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:22:00.563851 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:21:56.661234 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:22:00.568189 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:21:56.661256 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:21:56.661869 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:21:56.661925 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:22:00.570769 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:21:56.661999 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:21:56.672340 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:21:56.672411 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:21:56.672444 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:21:56.672469 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:21:56.672500 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:21:56.672634 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:21:59.606941 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:22:00.572016 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:21:59.607186 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:22:00.573652 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 02:21:59.607289 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:21:59.607469 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:22:00.574805 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:21:59.607541 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:22:00.576909 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:21:59.607657 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:21:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:22:00.580105 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:22:00.592683 kernel: fuse: init (API version 7.34) Dec 13 02:22:00.594877 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:22:00.600646 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:22:00.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.604289 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:22:00.611271 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:22:00.614770 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:22:00.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.616297 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:22:00.622485 systemd-journald[1410]: Time spent on flushing to /var/log/journal/ec2fe16a023dbb17103fa6677f15ac9c is 94.935ms for 1191 entries. Dec 13 02:22:00.622485 systemd-journald[1410]: System Journal (/var/log/journal/ec2fe16a023dbb17103fa6677f15ac9c) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:22:00.735378 systemd-journald[1410]: Received client request to flush runtime journal. Dec 13 02:22:00.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:00.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.642345 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:22:00.710249 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:22:00.715689 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:22:00.736807 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:22:00.750451 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:22:00.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.754344 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:22:00.781977 udevadm[1450]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:22:00.829871 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:22:00.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:00.832856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:22:00.938273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:22:00.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:01.547894 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:22:01.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:01.549000 audit: BPF prog-id=18 op=LOAD Dec 13 02:22:01.549000 audit: BPF prog-id=19 op=LOAD Dec 13 02:22:01.549000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:22:01.549000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:22:01.550906 systemd[1]: Starting systemd-udevd.service... Dec 13 02:22:01.570384 systemd-udevd[1453]: Using default interface naming scheme 'v252'. Dec 13 02:22:01.638462 systemd[1]: Started systemd-udevd.service. Dec 13 02:22:01.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:01.640000 audit: BPF prog-id=20 op=LOAD Dec 13 02:22:01.642077 systemd[1]: Starting systemd-networkd.service... Dec 13 02:22:01.682000 audit: BPF prog-id=21 op=LOAD Dec 13 02:22:01.682000 audit: BPF prog-id=22 op=LOAD Dec 13 02:22:01.682000 audit: BPF prog-id=23 op=LOAD Dec 13 02:22:01.684983 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:22:01.718912 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Dec 13 02:22:01.749836 (udev-worker)[1457]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:22:01.750942 systemd[1]: Started systemd-userdbd.service. Dec 13 02:22:01.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:01.830603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:22:01.835533 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:22:01.837571 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:22:01.840550 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:22:01.898532 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1464) Dec 13 02:22:01.900706 systemd-networkd[1462]: lo: Link UP Dec 13 02:22:01.900717 systemd-networkd[1462]: lo: Gained carrier Dec 13 02:22:01.901371 systemd-networkd[1462]: Enumeration completed Dec 13 02:22:01.901492 systemd[1]: Started systemd-networkd.service. Dec 13 02:22:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:01.904828 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:22:01.907992 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:22:01.914553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:22:01.914998 systemd-networkd[1462]: eth0: Link UP Dec 13 02:22:01.915497 systemd-networkd[1462]: eth0: Gained carrier Dec 13 02:22:01.926694 systemd-networkd[1462]: eth0: DHCPv4 address 172.31.16.161/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:22:01.926000 audit[1461]: AVC avc: denied { confidentiality } for pid=1461 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:22:01.926000 audit[1461]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a40bd7ec40 a1=337fc a2=7fd370f6ebc5 a3=5 items=110 ppid=1453 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:22:01.926000 audit: CWD cwd="/" Dec 13 02:22:01.926000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=1 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=2 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=3 name=(null) inode=15044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=4 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=5 name=(null) inode=15045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=6 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=7 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=8 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=9 name=(null) inode=15047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=10 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=11 name=(null) inode=15048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=12 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=13 name=(null) inode=15049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=14 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=15 name=(null) inode=15050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=16 name=(null) inode=15046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=17 name=(null) inode=15051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=18 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=19 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=20 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=21 name=(null) inode=15053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=22 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=23 name=(null) inode=15054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=24 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=25 name=(null) inode=15055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=26 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=27 name=(null) inode=15056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=28 name=(null) inode=15052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=29 name=(null) inode=15057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=30 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=31 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=32 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=33 name=(null) inode=15059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=34 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=35 name=(null) inode=15060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=36 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=37 
name=(null) inode=15061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=38 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=39 name=(null) inode=15062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=40 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=41 name=(null) inode=15063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=42 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=43 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=44 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=45 name=(null) inode=15065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=46 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=47 name=(null) inode=15066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=48 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=49 name=(null) inode=15067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=50 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=51 name=(null) inode=15068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=52 name=(null) inode=15064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=53 name=(null) inode=15069 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=55 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=56 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=57 name=(null) inode=15071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=58 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=59 name=(null) inode=15072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=60 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=61 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=62 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=63 name=(null) inode=15074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=64 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=65 name=(null) inode=15075 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=66 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=67 name=(null) inode=15076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=68 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=69 name=(null) inode=15077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=70 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=71 name=(null) inode=15078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=72 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=73 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=74 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=75 name=(null) inode=15080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=76 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=77 name=(null) inode=15081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=78 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=79 name=(null) inode=15082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=80 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=81 name=(null) inode=15083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=82 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=83 name=(null) inode=15084 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=84 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=85 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=86 
name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=87 name=(null) inode=15086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=88 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=89 name=(null) inode=15087 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=90 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=91 name=(null) inode=15088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=92 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=93 name=(null) inode=15089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=94 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=95 name=(null) inode=15090 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=96 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=97 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=98 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=99 name=(null) inode=15092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=100 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=101 name=(null) inode=15093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=102 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=103 name=(null) inode=15094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=104 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=105 name=(null) inode=15095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=106 name=(null) inode=15091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=107 name=(null) inode=15096 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PATH item=109 name=(null) inode=15097 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:01.926000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:22:01.982546 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:22:01.992532 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:22:02.001531 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:22:02.100934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:22:02.200233 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:22:02.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.202523 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:22:02.263161 lvm[1567]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:22:02.290740 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:22:02.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.292173 systemd[1]: Reached target cryptsetup.target. Dec 13 02:22:02.294760 systemd[1]: Starting lvm2-activation.service... Dec 13 02:22:02.302253 lvm[1568]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:22:02.340663 systemd[1]: Finished lvm2-activation.service. Dec 13 02:22:02.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.341879 systemd[1]: Reached target local-fs-pre.target. 
Dec 13 02:22:02.343111 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:22:02.343150 systemd[1]: Reached target local-fs.target. Dec 13 02:22:02.344081 systemd[1]: Reached target machines.target. Dec 13 02:22:02.346327 systemd[1]: Starting ldconfig.service... Dec 13 02:22:02.349071 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:02.349141 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:02.351076 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:22:02.354013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:22:02.357075 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:22:02.362789 systemd[1]: Starting systemd-sysext.service... Dec 13 02:22:02.376883 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1570 (bootctl) Dec 13 02:22:02.378857 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:22:02.408196 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:22:02.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.422385 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:22:02.438578 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:22:02.438887 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:22:02.458769 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 02:22:02.657541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:22:02.724532 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 02:22:02.732213 systemd-fsck[1580]: fsck.fat 4.2 (2021-01-31) Dec 13 02:22:02.732213 systemd-fsck[1580]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:22:02.734837 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:22:02.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.738410 systemd[1]: Mounting boot.mount... Dec 13 02:22:02.779664 systemd[1]: Mounted boot.mount. Dec 13 02:22:02.781169 (sd-sysext)[1583]: Using extensions 'kubernetes'. Dec 13 02:22:02.781687 (sd-sysext)[1583]: Merged extensions into '/usr'. Dec 13 02:22:02.803077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:22:02.804396 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:22:02.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.825718 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:02.829082 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:22:02.830582 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 02:22:02.833277 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:02.836800 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:22:02.840745 systemd[1]: Starting modprobe@loop.service... Dec 13 02:22:02.842301 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:02.842553 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:02.842738 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:02.844293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:02.844482 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.847215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:02.847395 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:02.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.848932 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:22:02.849106 systemd[1]: Finished modprobe@loop.service. Dec 13 02:22:02.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.850669 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:02.850826 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:22:02.857080 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:22:02.861649 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:22:02.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:02.863229 systemd[1]: Finished systemd-sysext.service. Dec 13 02:22:02.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:02.867318 systemd[1]: Starting ensure-sysext.service... Dec 13 02:22:02.869750 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:22:02.880477 systemd[1]: Reloading. Dec 13 02:22:02.891414 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:22:02.897481 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:22:02.904259 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:22:02.993781 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T02:22:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:22:02.993819 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T02:22:02Z" level=info msg="torcx already run" Dec 13 02:22:03.204440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:22:03.204569 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:22:03.229031 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:22:03.323000 audit: BPF prog-id=24 op=LOAD Dec 13 02:22:03.323000 audit: BPF prog-id=25 op=LOAD Dec 13 02:22:03.323000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:22:03.323000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:22:03.327000 audit: BPF prog-id=26 op=LOAD Dec 13 02:22:03.327000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:22:03.328000 audit: BPF prog-id=27 op=LOAD Dec 13 02:22:03.328000 audit: BPF prog-id=28 op=LOAD Dec 13 02:22:03.328000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:22:03.328000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:22:03.329000 audit: BPF prog-id=29 op=LOAD Dec 13 02:22:03.329000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:22:03.329000 audit: BPF prog-id=30 op=LOAD Dec 13 02:22:03.329000 audit: BPF prog-id=31 op=LOAD Dec 13 02:22:03.329000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:22:03.329000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:22:03.332000 audit: BPF prog-id=32 op=LOAD Dec 13 02:22:03.332000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:22:03.339164 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:22:03.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.347275 systemd[1]: Starting audit-rules.service... Dec 13 02:22:03.351356 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:22:03.355209 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:22:03.360000 audit: BPF prog-id=33 op=LOAD Dec 13 02:22:03.362369 systemd[1]: Starting systemd-resolved.service... Dec 13 02:22:03.364000 audit: BPF prog-id=34 op=LOAD Dec 13 02:22:03.366525 systemd[1]: Starting systemd-timesyncd.service... 
Dec 13 02:22:03.371742 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:22:03.377035 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:22:03.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.379636 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:03.385334 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.386064 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.388343 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:03.393359 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:22:03.396901 systemd[1]: Starting modprobe@loop.service... Dec 13 02:22:03.398037 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.398237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:03.398413 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:03.398560 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.404981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:03.405274 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:03.406953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:03.407175 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:03.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.409255 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:03.413783 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.414147 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.416613 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:03.419683 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 02:22:03.424831 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.425070 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:03.425238 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:03.425767 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.433849 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.433000 audit[1680]: SYSTEM_BOOT pid=1680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.434333 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.436960 systemd[1]: Starting modprobe@drm.service... Dec 13 02:22:03.441250 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.441835 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:03.442197 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:03.442730 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:03.448275 systemd-networkd[1462]: eth0: Gained IPv6LL Dec 13 02:22:03.452933 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:22:03.454758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:03.454947 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:03.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.457662 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:22:03.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.468484 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:22:03.468788 systemd[1]: Finished modprobe@loop.service. Dec 13 02:22:03.469923 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 02:22:03.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.470946 systemd[1]: Finished ensure-sysext.service. Dec 13 02:22:03.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.482942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:03.483277 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:03.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.484551 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:03.484950 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:22:03.485113 systemd[1]: Finished modprobe@drm.service. Dec 13 02:22:03.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.532902 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:22:03.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:03.579000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:22:03.579000 audit[1701]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb6193bc0 a2=420 a3=0 items=0 ppid=1675 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:22:03.579000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:22:03.580612 augenrules[1701]: No rules Dec 13 02:22:03.580846 systemd[1]: Finished audit-rules.service. Dec 13 02:22:03.591680 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:22:03.592729 systemd[1]: Reached target time-set.target. Dec 13 02:22:03.613573 systemd-resolved[1678]: Positive Trust Anchors: Dec 13 02:22:03.613731 systemd-resolved[1678]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:22:03.613814 systemd-resolved[1678]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:22:03.661325 systemd-resolved[1678]: Defaulting to hostname 'linux'. Dec 13 02:22:03.664298 systemd-timesyncd[1679]: Contacted time server 138.236.128.36:123 (0.flatcar.pool.ntp.org). Dec 13 02:22:03.664375 systemd-timesyncd[1679]: Initial clock synchronization to Fri 2024-12-13 02:22:03.882863 UTC. Dec 13 02:22:03.664890 systemd[1]: Started systemd-resolved.service. Dec 13 02:22:03.666161 systemd[1]: Reached target network.target. Dec 13 02:22:03.667297 systemd[1]: Reached target network-online.target. Dec 13 02:22:03.668244 systemd[1]: Reached target nss-lookup.target. Dec 13 02:22:03.673719 ldconfig[1569]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:22:03.685725 systemd[1]: Finished ldconfig.service. Dec 13 02:22:03.687992 systemd[1]: Starting systemd-update-done.service... Dec 13 02:22:03.696137 systemd[1]: Finished systemd-update-done.service. Dec 13 02:22:03.698729 systemd[1]: Reached target sysinit.target. Dec 13 02:22:03.703779 systemd[1]: Started motdgen.path. Dec 13 02:22:03.705288 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:22:03.706730 systemd[1]: Started logrotate.timer. Dec 13 02:22:03.708174 systemd[1]: Started mdadm.timer. Dec 13 02:22:03.709577 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:22:03.710761 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:22:03.710807 systemd[1]: Reached target paths.target. Dec 13 02:22:03.711877 systemd[1]: Reached target timers.target. Dec 13 02:22:03.713499 systemd[1]: Listening on dbus.socket. Dec 13 02:22:03.716076 systemd[1]: Starting docker.socket... Dec 13 02:22:03.722441 systemd[1]: Listening on sshd.socket. Dec 13 02:22:03.723494 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:03.724189 systemd[1]: Listening on docker.socket. Dec 13 02:22:03.725409 systemd[1]: Reached target sockets.target. Dec 13 02:22:03.726220 systemd[1]: Reached target basic.target. Dec 13 02:22:03.727465 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.727505 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:22:03.729102 systemd[1]: Started amazon-ssm-agent.service. Dec 13 02:22:03.731758 systemd[1]: Starting containerd.service... Dec 13 02:22:03.735097 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:22:03.738852 systemd[1]: Starting dbus.service... Dec 13 02:22:03.741611 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:22:03.745349 systemd[1]: Starting extend-filesystems.service... 
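The trust-anchor and time-server lines above have runtime query equivalents; both are standard systemd tooling on this image:

    # Per-link DNS servers, DNSSEC setting, and search domains
    resolvectl status

    # Which NTP server systemd-timesyncd is using and the last sync offset
    timedatectl timesync-status

The negative trust anchors listed (10.in-addr.arpa, 168.192.in-addr.arpa, home.arpa, and so on) are the RFC 6303-style locally served zones for which systemd-resolved skips DNSSEC validation, since no public chain of trust can exist for them.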
Dec 13 02:22:03.746488 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:22:03.750316 systemd[1]: Starting kubelet.service... Dec 13 02:22:03.752863 systemd[1]: Starting motdgen.service... Dec 13 02:22:03.760442 systemd[1]: Started nvidia.service. Dec 13 02:22:03.768344 systemd[1]: Starting prepare-helm.service... Dec 13 02:22:03.772782 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:22:03.776956 systemd[1]: Starting sshd-keygen.service... Dec 13 02:22:03.783397 systemd[1]: Starting systemd-logind.service... Dec 13 02:22:03.784440 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:03.784543 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:22:03.785628 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:22:03.794282 systemd[1]: Starting update-engine.service... Dec 13 02:22:03.804861 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:22:03.931336 jq[1724]: true Dec 13 02:22:03.935025 jq[1714]: false Dec 13 02:22:03.936089 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:22:03.936423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:22:03.944685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:22:03.944989 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:22:03.953395 tar[1727]: linux-amd64/helm Dec 13 02:22:04.019755 jq[1732]: true Dec 13 02:22:04.024824 extend-filesystems[1715]: Found loop1 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p1 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p2 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p3 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found usr Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p4 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p6 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p7 Dec 13 02:22:04.024824 extend-filesystems[1715]: Found nvme0n1p9 Dec 13 02:22:04.024824 extend-filesystems[1715]: Checking size of /dev/nvme0n1p9 Dec 13 02:22:04.048152 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:22:04.048374 systemd[1]: Finished motdgen.service. Dec 13 02:22:04.085961 amazon-ssm-agent[1710]: 2024/12/13 02:22:04 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 02:22:04.085769 systemd[1]: Started dbus.service. Dec 13 02:22:04.085326 dbus-daemon[1713]: [system] SELinux support is enabled Dec 13 02:22:04.090608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:22:04.090655 systemd[1]: Reached target system-config.target. Dec 13 02:22:04.091735 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:22:04.091761 systemd[1]: Reached target user-config.target. 
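"Skipped due to 'exec-condition'" (enable-oem-cloudinit.service above) is the runtime cousin of the Condition*= checks: ExecCondition= runs a command before ExecStart=, and an exit status of 1-254 skips the unit cleanly, while 255 or abnormal termination fails it. A sketch with a hypothetical command line; the real enable-oem-cloudinit check differs:

    # Illustrative unit fragment using ExecCondition=
    [Service]
    Type=oneshot
    # Start only if an OEM cloud-config actually exists
    ExecCondition=/usr/bin/test -e /usr/share/oem/cloud-config.yml
    ExecStart=/usr/bin/echo "would run cloud-init here"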
Dec 13 02:22:04.092981 dbus-daemon[1713]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1462 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:22:04.094380 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:22:04.100486 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:22:04.115007 amazon-ssm-agent[1710]: Initializing new seelog logger Dec 13 02:22:04.115233 amazon-ssm-agent[1710]: New Seelog Logger Creation Complete Dec 13 02:22:04.115556 amazon-ssm-agent[1710]: 2024/12/13 02:22:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:22:04.115556 amazon-ssm-agent[1710]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:22:04.115710 amazon-ssm-agent[1710]: 2024/12/13 02:22:04 processing appconfig overrides Dec 13 02:22:04.143138 extend-filesystems[1715]: Resized partition /dev/nvme0n1p9 Dec 13 02:22:04.174322 extend-filesystems[1779]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:22:04.195578 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 02:22:04.290168 update_engine[1723]: I1213 02:22:04.288745 1723 main.cc:92] Flatcar Update Engine starting Dec 13 02:22:04.307434 systemd[1]: Started update-engine.service. Dec 13 02:22:04.324986 update_engine[1723]: I1213 02:22:04.307721 1723 update_check_scheduler.cc:74] Next update check in 11m47s Dec 13 02:22:04.311093 systemd[1]: Started locksmithd.service. Dec 13 02:22:04.328030 env[1728]: time="2024-12-13T02:22:04.327949676Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:22:04.346591 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 02:22:04.400360 systemd-logind[1722]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:22:04.401043 systemd-logind[1722]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:22:04.401194 systemd-logind[1722]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:22:04.405281 extend-filesystems[1779]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 02:22:04.405281 extend-filesystems[1779]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:22:04.405281 extend-filesystems[1779]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 02:22:04.405027 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:22:04.415771 extend-filesystems[1715]: Resized filesystem in /dev/nvme0n1p9 Dec 13 02:22:04.405257 systemd[1]: Finished extend-filesystems.service. Dec 13 02:22:04.411152 systemd-logind[1722]: New seat seat0. Dec 13 02:22:04.419810 systemd[1]: Started systemd-logind.service. Dec 13 02:22:04.428586 bash[1786]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:22:04.433753 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:22:04.446152 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:22:04.476489 env[1728]: time="2024-12-13T02:22:04.476432390Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:22:04.476837 env[1728]: time="2024-12-13T02:22:04.476813999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:22:04.482747 env[1728]: time="2024-12-13T02:22:04.482637716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:04.482933 env[1728]: time="2024-12-13T02:22:04.482907846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:04.483294 env[1728]: time="2024-12-13T02:22:04.483271969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:04.483383 env[1728]: time="2024-12-13T02:22:04.483366438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:04.483463 env[1728]: time="2024-12-13T02:22:04.483446209Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:22:04.483553 env[1728]: time="2024-12-13T02:22:04.483521964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:04.483722 env[1728]: time="2024-12-13T02:22:04.483704606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:04.484066 env[1728]: time="2024-12-13T02:22:04.484043752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:04.487484 env[1728]: time="2024-12-13T02:22:04.487427660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:04.487748 env[1728]: time="2024-12-13T02:22:04.487725049Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:22:04.488058 env[1728]: time="2024-12-13T02:22:04.488034426Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:22:04.488319 env[1728]: time="2024-12-13T02:22:04.488297848Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.495647867Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.495700439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.495720294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496145160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496172539Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496247970Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496271317Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496291732Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496308381Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496327375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496343429Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496405704Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496585912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:22:04.497357 env[1728]: time="2024-12-13T02:22:04.496688523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:22:04.498907 env[1728]: time="2024-12-13T02:22:04.497212378Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:22:04.498907 env[1728]: time="2024-12-13T02:22:04.497266775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.498907 env[1728]: time="2024-12-13T02:22:04.497288066Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:22:04.498907 env[1728]: time="2024-12-13T02:22:04.497579758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.497607257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499187480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499210538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499230607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499250948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499269148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499290328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499311913Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499482742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499504170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499523566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499554789Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499585452Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:22:04.500422 env[1728]: time="2024-12-13T02:22:04.499603497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:22:04.501059 env[1728]: time="2024-12-13T02:22:04.499627824Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:22:04.501059 env[1728]: time="2024-12-13T02:22:04.499673700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:22:04.501198 env[1728]: time="2024-12-13T02:22:04.499952905Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:22:04.501198 env[1728]: time="2024-12-13T02:22:04.500033921Z" level=info msg="Connect containerd service" Dec 13 02:22:04.501198 env[1728]: time="2024-12-13T02:22:04.500076005Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.501828526Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.502291755Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.502345219Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.502408918Z" level=info msg="containerd successfully booted in 0.305939s" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504618525Z" level=info msg="Start subscribing containerd event" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504690897Z" level=info msg="Start recovering state" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504782027Z" level=info msg="Start event monitor" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504808448Z" level=info msg="Start snapshots syncer" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504824440Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:22:04.513371 env[1728]: time="2024-12-13T02:22:04.504901714Z" level=info msg="Start streaming server" Dec 13 02:22:04.502499 systemd[1]: Started containerd.service. Dec 13 02:22:04.548780 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:22:04.548971 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:22:04.550977 dbus-daemon[1713]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1763 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:22:04.556049 systemd[1]: Starting polkit.service... Dec 13 02:22:04.597783 polkitd[1843]: Started polkitd version 121 Dec 13 02:22:04.624414 polkitd[1843]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:22:04.624683 polkitd[1843]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:22:04.638824 polkitd[1843]: Finished loading, compiling and executing 2 rules Dec 13 02:22:04.639742 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:22:04.640041 systemd[1]: Started polkit.service. Dec 13 02:22:04.640948 polkitd[1843]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:22:04.678197 systemd-hostnamed[1763]: Hostname set to (transient) Dec 13 02:22:04.678198 systemd-resolved[1678]: System hostname changed to 'ip-172-31-16-161'. 
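The long "Start cri plugin with config {...}" dump above is containerd echoing its effective CRI configuration; the notable detail is Options:map[SystemdCgroup:true] on the runc runtime, meaning cgroup management is delegated to systemd rather than done via cgroupfs. In containerd's config.toml (version 2 layout, which matches the containerd 1.6.16 in this log) the same setting reads roughly:

    # /etc/containerd/config.toml -- excerpt, illustrative
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

This setting must agree with the kubelet's cgroupDriver (systemd vs cgroupfs), or pods fail to start once the node joins a cluster.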
Dec 13 02:22:04.907678 coreos-metadata[1712]: Dec 13 02:22:04.904 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:22:04.918010 coreos-metadata[1712]: Dec 13 02:22:04.917 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 02:22:04.918845 coreos-metadata[1712]: Dec 13 02:22:04.918 INFO Fetch successful Dec 13 02:22:04.919070 coreos-metadata[1712]: Dec 13 02:22:04.919 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:22:04.919710 coreos-metadata[1712]: Dec 13 02:22:04.919 INFO Fetch successful Dec 13 02:22:04.923998 unknown[1712]: wrote ssh authorized keys file for user: core Dec 13 02:22:04.964184 update-ssh-keys[1889]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:22:04.964584 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:22:05.099250 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Create new startup processor Dec 13 02:22:05.102273 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 02:22:05.102435 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing bookkeeping folders Dec 13 02:22:05.102667 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO removing the completed state files Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing bookkeeping folders for long running plugins Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing healthcheck folders for long running plugins Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing locations for inventory plugin Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing default location for custom inventory Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing default location for file inventory Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Initializing default location for role inventory Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Init the cloudwatchlogs publisher Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:runDocument Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:configurePackage Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:downloadContent Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 02:22:05.113155 
amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:configureDocker Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 02:22:05.113155 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO OS: linux, Arch: amd64 Dec 13 02:22:05.114005 amazon-ssm-agent[1710]: datastore file /var/lib/amazon/ssm/i-048c5bd41dfa04a7e/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 02:22:05.200907 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 02:22:05.296907 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 02:22:05.322118 sshd_keygen[1745]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:22:05.390137 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 02:22:05.392630 systemd[1]: Finished sshd-keygen.service. Dec 13 02:22:05.397483 systemd[1]: Starting issuegen.service... Dec 13 02:22:05.414658 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:22:05.414885 systemd[1]: Finished issuegen.service. Dec 13 02:22:05.418852 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:22:05.431017 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:22:05.434601 systemd[1]: Started getty@tty1.service. Dec 13 02:22:05.438227 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:22:05.439569 systemd[1]: Reached target getty.target. Dec 13 02:22:05.485709 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-048c5bd41dfa04a7e, requestId: 4c4420f3-4615-4604-8087-ef1aae950069 Dec 13 02:22:05.579602 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] listening reply. Dec 13 02:22:05.653712 tar[1727]: linux-amd64/LICENSE Dec 13 02:22:05.654058 tar[1727]: linux-amd64/README.md Dec 13 02:22:05.659452 systemd[1]: Finished prepare-helm.service. Dec 13 02:22:05.672223 locksmithd[1801]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:22:05.674331 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [OfflineService] Starting document processing engine... 
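The coreos-metadata fetches a little earlier ("Putting http://169.254.169.254/latest/api/token", then the public-keys reads) are the EC2 instance metadata service's IMDSv2 handshake: a PUT obtains a session token, which then authorizes the GETs. Reproduced by hand:

    # IMDSv2: fetch a session token, then read metadata with it
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"

The log's reads go against the dated 2019-10-01 API path; the token header works for dated and "latest" paths alike.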
Dec 13 02:22:05.769532 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [OfflineService] [EngineProcessor] Starting Dec 13 02:22:05.864894 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 02:22:05.960435 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [OfflineService] Starting message polling Dec 13 02:22:06.056208 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [OfflineService] Starting send replies to MDS Dec 13 02:22:06.152323 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 02:22:06.248344 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 02:22:06.344966 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 02:22:06.441498 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 02:22:06.538224 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 02:22:06.635204 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] Starting message polling Dec 13 02:22:06.732377 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 02:22:06.813006 systemd[1]: Started kubelet.service. Dec 13 02:22:06.814865 systemd[1]: Reached target multi-user.target. Dec 13 02:22:06.819618 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:22:06.829997 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [instanceID=i-048c5bd41dfa04a7e] Starting association polling Dec 13 02:22:06.832985 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:22:06.833218 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:22:06.834933 systemd[1]: Startup finished in 655ms (kernel) + 7.408s (initrd) + 10.632s (userspace) = 18.697s. Dec 13 02:22:06.927467 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 02:22:07.025092 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 02:22:07.123330 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 02:22:07.221369 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 02:22:07.319710 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 02:22:07.418204 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [HealthCheck] HealthCheck reporting agent health. 
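The boot summary above ("Startup finished in 655ms (kernel) + 7.408s (initrd) + 10.632s (userspace) = 18.697s") is the same figure systemd-analyze prints, and the slower units can be broken down further:

    systemd-analyze                                   # kernel/initrd/userspace totals
    systemd-analyze blame                             # per-unit startup time, slowest first
    systemd-analyze critical-chain multi-user.target  # the dependency path that gated boot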
Dec 13 02:22:07.516910 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 02:22:07.615719 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [StartupProcessor] Executing startup processor tasks Dec 13 02:22:07.714646 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 02:22:07.813866 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 02:22:07.913432 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 02:22:08.012920 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-048c5bd41dfa04a7e?role=subscribe&stream=input Dec 13 02:22:08.064180 kubelet[1926]: E1213 02:22:08.064127 1926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:22:08.065957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:22:08.066138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:22:08.066458 systemd[1]: kubelet.service: Consumed 1.056s CPU time. Dec 13 02:22:08.112963 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-048c5bd41dfa04a7e?role=subscribe&stream=input Dec 13 02:22:08.213456 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 02:22:08.314680 amazon-ssm-agent[1710]: 2024-12-13 02:22:05 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 02:22:12.588279 systemd[1]: Created slice system-sshd.slice. Dec 13 02:22:12.591168 systemd[1]: Started sshd@0-172.31.16.161:22-139.178.68.195:40436.service. Dec 13 02:22:12.782755 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 40436 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:12.786422 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:12.802640 systemd[1]: Created slice user-500.slice. Dec 13 02:22:12.804434 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:22:12.808158 systemd-logind[1722]: New session 1 of user core. Dec 13 02:22:12.819384 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:22:12.821866 systemd[1]: Starting user@500.service... Dec 13 02:22:12.827137 (systemd)[1936]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:12.959092 systemd[1936]: Queued start job for default target default.target. Dec 13 02:22:12.959879 systemd[1936]: Reached target paths.target. Dec 13 02:22:12.959910 systemd[1936]: Reached target sockets.target. Dec 13 02:22:12.959928 systemd[1936]: Reached target timers.target. Dec 13 02:22:12.959946 systemd[1936]: Reached target basic.target. Dec 13 02:22:12.960003 systemd[1936]: Reached target default.target. 
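The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet joined a cluster: that file is generated by kubeadm, not shipped in the OS image. A sketch of the usual sequence on a kubeadm-managed node, with placeholders to be filled in:

    # Control plane: writes /var/lib/kubelet/config.yaml and /etc/kubernetes/*
    kubeadm init

    # Worker: the join flow writes the same kubelet config
    kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

Until one of these runs, systemd keeps restarting kubelet.service, which produces the "Scheduled restart job, restart counter is at N" entries later in this log.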
Dec 13 02:22:12.960046 systemd[1936]: Startup finished in 124ms. Dec 13 02:22:12.960853 systemd[1]: Started user@500.service. Dec 13 02:22:12.962410 systemd[1]: Started session-1.scope. Dec 13 02:22:13.117729 systemd[1]: Started sshd@1-172.31.16.161:22-139.178.68.195:40452.service. Dec 13 02:22:13.270054 sshd[1945]: Accepted publickey for core from 139.178.68.195 port 40452 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:13.271553 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:13.276166 systemd-logind[1722]: New session 2 of user core. Dec 13 02:22:13.277678 systemd[1]: Started session-2.scope. Dec 13 02:22:13.403101 sshd[1945]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:13.407031 systemd[1]: sshd@1-172.31.16.161:22-139.178.68.195:40452.service: Deactivated successfully. Dec 13 02:22:13.407971 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:22:13.411004 systemd-logind[1722]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:22:13.413169 systemd-logind[1722]: Removed session 2. Dec 13 02:22:13.428891 systemd[1]: Started sshd@2-172.31.16.161:22-139.178.68.195:40466.service. Dec 13 02:22:13.588570 sshd[1951]: Accepted publickey for core from 139.178.68.195 port 40466 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:13.590270 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:13.595110 systemd-logind[1722]: New session 3 of user core. Dec 13 02:22:13.595742 systemd[1]: Started session-3.scope. Dec 13 02:22:13.717094 sshd[1951]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:13.720105 systemd[1]: sshd@2-172.31.16.161:22-139.178.68.195:40466.service: Deactivated successfully. Dec 13 02:22:13.720964 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:22:13.721703 systemd-logind[1722]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:22:13.722587 systemd-logind[1722]: Removed session 3. Dec 13 02:22:13.747270 systemd[1]: Started sshd@3-172.31.16.161:22-139.178.68.195:40482.service. Dec 13 02:22:13.920272 sshd[1957]: Accepted publickey for core from 139.178.68.195 port 40482 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:13.921267 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:13.926592 systemd[1]: Started session-4.scope. Dec 13 02:22:13.927301 systemd-logind[1722]: New session 4 of user core. Dec 13 02:22:14.054226 sshd[1957]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:14.057256 systemd[1]: sshd@3-172.31.16.161:22-139.178.68.195:40482.service: Deactivated successfully. Dec 13 02:22:14.058111 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:22:14.058825 systemd-logind[1722]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:22:14.059825 systemd-logind[1722]: Removed session 4. Dec 13 02:22:14.079345 systemd[1]: Started sshd@4-172.31.16.161:22-139.178.68.195:40486.service. Dec 13 02:22:14.240368 sshd[1963]: Accepted publickey for core from 139.178.68.195 port 40486 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:14.241368 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:14.246727 systemd[1]: Started session-5.scope. Dec 13 02:22:14.247344 systemd-logind[1722]: New session 5 of user core. 
Dec 13 02:22:14.380906 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:22:14.381276 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:22:14.408249 systemd[1]: Starting docker.service... Dec 13 02:22:14.453004 env[1976]: time="2024-12-13T02:22:14.452963404Z" level=info msg="Starting up" Dec 13 02:22:14.454489 env[1976]: time="2024-12-13T02:22:14.454462597Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:22:14.455097 env[1976]: time="2024-12-13T02:22:14.455077831Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:22:14.455205 env[1976]: time="2024-12-13T02:22:14.455191038Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:22:14.455253 env[1976]: time="2024-12-13T02:22:14.455245590Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:22:14.456987 env[1976]: time="2024-12-13T02:22:14.456967830Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:22:14.457079 env[1976]: time="2024-12-13T02:22:14.457068252Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:22:14.457134 env[1976]: time="2024-12-13T02:22:14.457121849Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:22:14.457289 env[1976]: time="2024-12-13T02:22:14.457276453Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:22:14.470341 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport740635742-merged.mount: Deactivated successfully. Dec 13 02:22:14.544817 env[1976]: time="2024-12-13T02:22:14.544701803Z" level=info msg="Loading containers: start." Dec 13 02:22:14.767538 kernel: Initializing XFRM netlink socket Dec 13 02:22:14.835028 env[1976]: time="2024-12-13T02:22:14.834667326Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:22:14.836061 (udev-worker)[1987]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:22:14.965440 systemd-networkd[1462]: docker0: Link UP Dec 13 02:22:14.986908 env[1976]: time="2024-12-13T02:22:14.986846238Z" level=info msg="Loading containers: done." Dec 13 02:22:15.006864 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2636433768-merged.mount: Deactivated successfully. Dec 13 02:22:15.014770 env[1976]: time="2024-12-13T02:22:15.014727014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:22:15.015038 env[1976]: time="2024-12-13T02:22:15.015008445Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:22:15.015160 env[1976]: time="2024-12-13T02:22:15.015136187Z" level=info msg="Daemon has completed initialization" Dec 13 02:22:15.039905 systemd[1]: Started docker.service. 
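The dockerd hint above ("Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address") corresponds to the "bip" key in the daemon config file. A minimal sketch, with an arbitrary example range:

    # /etc/docker/daemon.json -- 10.200.0.1/24 is an example, not a recommendation
    {
      "bip": "10.200.0.1/24"
    }

    # Apply:
    systemctl restart docker

Changing bip only moves the default docker0 bridge; user-defined networks keep their own subnets.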
Dec 13 02:22:15.049644 env[1976]: time="2024-12-13T02:22:15.049577660Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:22:16.316411 env[1728]: time="2024-12-13T02:22:16.316366237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 02:22:16.972235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577668890.mount: Deactivated successfully. Dec 13 02:22:18.171188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:22:18.171451 systemd[1]: Stopped kubelet.service. Dec 13 02:22:18.171529 systemd[1]: kubelet.service: Consumed 1.056s CPU time. Dec 13 02:22:18.174654 systemd[1]: Starting kubelet.service... Dec 13 02:22:18.425565 systemd[1]: Started kubelet.service. Dec 13 02:22:18.488530 kubelet[2102]: E1213 02:22:18.488466 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:22:18.492278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:22:18.492447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:22:19.850591 env[1728]: time="2024-12-13T02:22:19.850254093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:19.853467 env[1728]: time="2024-12-13T02:22:19.853300233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:19.855830 env[1728]: time="2024-12-13T02:22:19.855795057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:19.857915 env[1728]: time="2024-12-13T02:22:19.857881009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:19.858854 env[1728]: time="2024-12-13T02:22:19.858818615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 02:22:19.861167 env[1728]: time="2024-12-13T02:22:19.861137829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 02:22:22.282732 env[1728]: time="2024-12-13T02:22:22.282679572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:22.285469 env[1728]: time="2024-12-13T02:22:22.285422364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:22.288481 env[1728]: time="2024-12-13T02:22:22.288439063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 02:22:22.290326 env[1728]: time="2024-12-13T02:22:22.290286461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:22.291010 env[1728]: time="2024-12-13T02:22:22.290971334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 02:22:22.291780 env[1728]: time="2024-12-13T02:22:22.291753404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 02:22:24.204336 env[1728]: time="2024-12-13T02:22:24.204280756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.210962 env[1728]: time="2024-12-13T02:22:24.210916937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.213189 env[1728]: time="2024-12-13T02:22:24.213145663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.215290 env[1728]: time="2024-12-13T02:22:24.215255261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.216086 env[1728]: time="2024-12-13T02:22:24.216047561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 02:22:24.216755 env[1728]: time="2024-12-13T02:22:24.216730820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 02:22:25.477001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531162099.mount: Deactivated successfully. 
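The PullImage / ImageCreate events above are containerd populating its k8s.io namespace with the v1.31.4 control-plane images. For debugging, the same pulls can be issued manually:

    # containerd's own CLI; CRI-managed images live in the k8s.io namespace
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.4

    # Or via the CRI-level tool, if crictl is installed and pointed at
    # /run/containerd/containerd.sock
    crictl pull registry.k8s.io/pause:3.10

The tmpmounts .mount units that deactivate around each pull are containerd's scratch mounts for unpacking image layers; seeing them come and go is normal.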
Dec 13 02:22:26.261540 env[1728]: time="2024-12-13T02:22:26.261432363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:26.268702 env[1728]: time="2024-12-13T02:22:26.268642955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:26.271407 env[1728]: time="2024-12-13T02:22:26.271347568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:26.274575 env[1728]: time="2024-12-13T02:22:26.274539780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:26.275816 env[1728]: time="2024-12-13T02:22:26.275645658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 02:22:26.276877 env[1728]: time="2024-12-13T02:22:26.276844159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:22:26.905843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521766103.mount: Deactivated successfully. Dec 13 02:22:28.372519 env[1728]: time="2024-12-13T02:22:28.372453304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.375277 env[1728]: time="2024-12-13T02:22:28.375238054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.377815 env[1728]: time="2024-12-13T02:22:28.377776131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.380327 env[1728]: time="2024-12-13T02:22:28.380290279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.381352 env[1728]: time="2024-12-13T02:22:28.381316752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:22:28.382057 env[1728]: time="2024-12-13T02:22:28.382028835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 02:22:28.671303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:22:28.671623 systemd[1]: Stopped kubelet.service. Dec 13 02:22:28.674030 systemd[1]: Starting kubelet.service... Dec 13 02:22:28.917837 systemd[1]: Started kubelet.service. Dec 13 02:22:28.970890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796230361.mount: Deactivated successfully. 
Dec 13 02:22:28.982483 env[1728]: time="2024-12-13T02:22:28.982436073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.987076 env[1728]: time="2024-12-13T02:22:28.987030502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.989947 env[1728]: time="2024-12-13T02:22:28.989637177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.993421 env[1728]: time="2024-12-13T02:22:28.992130010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:28.993421 env[1728]: time="2024-12-13T02:22:28.992729935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 02:22:28.993679 env[1728]: time="2024-12-13T02:22:28.993641661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 02:22:28.999228 kubelet[2111]: E1213 02:22:28.999191 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:22:29.001321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:22:29.001490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:22:29.562002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861943884.mount: Deactivated successfully. Dec 13 02:22:29.600826 amazon-ssm-agent[1710]: 2024-12-13 02:22:29 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
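The "kubelet.service: Scheduled restart job, restart counter is at N" entries come from the unit's Restart= policy: after each failed start, systemd waits RestartSec and tries again. The exact values in the shipped kubelet.service are not visible in this log; a representative drop-in would be:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf -- illustrative values
    [Service]
    Restart=on-failure
    RestartSec=10

With config.yaml still missing, each attempt fails within seconds and the counter keeps climbing, matching the roughly ten-second cadence between the 02:22:18 and 02:22:28 failures above.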
Dec 13 02:22:32.171892 env[1728]: time="2024-12-13T02:22:32.171836295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:32.174948 env[1728]: time="2024-12-13T02:22:32.174903753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:32.177247 env[1728]: time="2024-12-13T02:22:32.177208199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:32.179258 env[1728]: time="2024-12-13T02:22:32.179222739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:32.180251 env[1728]: time="2024-12-13T02:22:32.180211510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 02:22:34.694250 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:22:36.343861 systemd[1]: Stopped kubelet.service. Dec 13 02:22:36.348476 systemd[1]: Starting kubelet.service... Dec 13 02:22:36.387049 systemd[1]: Reloading. Dec 13 02:22:36.531950 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2024-12-13T02:22:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:22:36.531995 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2024-12-13T02:22:36Z" level=info msg="torcx already run" Dec 13 02:22:36.675868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:22:36.676085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:22:36.704891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:22:36.935605 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:22:36.935706 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:22:36.935957 systemd[1]: Stopped kubelet.service. Dec 13 02:22:36.938181 systemd[1]: Starting kubelet.service... Dec 13 02:22:37.128130 systemd[1]: Started kubelet.service. Dec 13 02:22:37.196597 kubelet[2221]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:37.196597 kubelet[2221]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 02:22:37.196597 kubelet[2221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:37.196597 kubelet[2221]: I1213 02:22:37.194631 2221 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:22:37.516935 kubelet[2221]: I1213 02:22:37.516422 2221 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:22:37.518060 kubelet[2221]: I1213 02:22:37.518037 2221 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:22:37.518622 kubelet[2221]: I1213 02:22:37.518601 2221 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:22:37.614505 kubelet[2221]: I1213 02:22:37.614474 2221 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:22:37.614975 kubelet[2221]: E1213 02:22:37.614941 2221 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:37.627811 kubelet[2221]: E1213 02:22:37.627767 2221 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:22:37.627811 kubelet[2221]: I1213 02:22:37.627800 2221 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:22:37.637909 kubelet[2221]: I1213 02:22:37.636238 2221 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:22:37.638081 kubelet[2221]: I1213 02:22:37.638019 2221 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:22:37.638235 kubelet[2221]: I1213 02:22:37.638199 2221 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:22:37.638436 kubelet[2221]: I1213 02:22:37.638232 2221 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:22:37.638604 kubelet[2221]: I1213 02:22:37.638443 2221 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:22:37.638604 kubelet[2221]: I1213 02:22:37.638457 2221 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:22:37.638690 kubelet[2221]: I1213 02:22:37.638606 2221 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:37.652138 kubelet[2221]: I1213 02:22:37.652089 2221 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:22:37.652138 kubelet[2221]: I1213 02:22:37.652148 2221 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:22:37.652353 kubelet[2221]: I1213 02:22:37.652202 2221 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:22:37.652353 kubelet[2221]: I1213 02:22:37.652220 2221 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:22:37.675957 kubelet[2221]: W1213 02:22:37.675884 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-161&limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:37.675957 kubelet[2221]: E1213 02:22:37.675956 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.16.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-161&limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:37.681322 kubelet[2221]: W1213 02:22:37.681220 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:37.681556 kubelet[2221]: E1213 02:22:37.681533 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:37.681753 kubelet[2221]: I1213 02:22:37.681739 2221 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:22:37.688879 kubelet[2221]: I1213 02:22:37.688843 2221 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:22:37.691076 kubelet[2221]: W1213 02:22:37.691046 2221 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:22:37.693369 kubelet[2221]: I1213 02:22:37.693350 2221 server.go:1269] "Started kubelet" Dec 13 02:22:37.699789 kubelet[2221]: I1213 02:22:37.699742 2221 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:22:37.718127 kubelet[2221]: I1213 02:22:37.715228 2221 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:22:37.719941 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:22:37.720048 kubelet[2221]: I1213 02:22:37.718444 2221 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:22:37.720788 kubelet[2221]: I1213 02:22:37.720771 2221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:22:37.721963 kubelet[2221]: E1213 02:22:37.718948 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.161:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-161.18109b4075bf1f9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-161,UID:ip-172-31-16-161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-161,},FirstTimestamp:2024-12-13 02:22:37.693312926 +0000 UTC m=+0.558212490,LastTimestamp:2024-12-13 02:22:37.693312926 +0000 UTC m=+0.558212490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-161,}" Dec 13 02:22:37.725571 kubelet[2221]: E1213 02:22:37.725540 2221 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:22:37.727500 kubelet[2221]: I1213 02:22:37.727207 2221 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:22:37.728911 kubelet[2221]: I1213 02:22:37.728887 2221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:22:37.729936 kubelet[2221]: I1213 02:22:37.729920 2221 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:22:37.730307 kubelet[2221]: E1213 02:22:37.730289 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-161\" not found" Dec 13 02:22:37.733766 kubelet[2221]: I1213 02:22:37.733750 2221 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:22:37.734453 kubelet[2221]: I1213 02:22:37.734438 2221 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:22:37.734580 kubelet[2221]: E1213 02:22:37.734480 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-161?timeout=10s\": dial tcp 172.31.16.161:6443: connect: connection refused" interval="200ms" Dec 13 02:22:37.737026 kubelet[2221]: I1213 02:22:37.737001 2221 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:22:37.737277 kubelet[2221]: I1213 02:22:37.737027 2221 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:22:37.737334 kubelet[2221]: I1213 02:22:37.737288 2221 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:22:37.752204 kubelet[2221]: I1213 02:22:37.751782 2221 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:22:37.753631 kubelet[2221]: I1213 02:22:37.753598 2221 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:22:37.753766 kubelet[2221]: I1213 02:22:37.753638 2221 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:22:37.753766 kubelet[2221]: I1213 02:22:37.753668 2221 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:22:37.753766 kubelet[2221]: E1213 02:22:37.753719 2221 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:22:37.771109 kubelet[2221]: W1213 02:22:37.769057 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:37.771109 kubelet[2221]: E1213 02:22:37.769142 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:37.771109 kubelet[2221]: W1213 02:22:37.769595 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:37.771109 kubelet[2221]: E1213 02:22:37.769660 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:37.783590 kubelet[2221]: I1213 02:22:37.783570 2221 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:22:37.783794 kubelet[2221]: I1213 02:22:37.783782 2221 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:22:37.783942 kubelet[2221]: I1213 02:22:37.783934 2221 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:37.786419 kubelet[2221]: I1213 02:22:37.786400 2221 policy_none.go:49] "None policy: Start" Dec 13 02:22:37.787593 kubelet[2221]: I1213 02:22:37.787576 2221 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:22:37.787807 kubelet[2221]: I1213 02:22:37.787797 2221 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:22:37.802490 systemd[1]: Created slice kubepods.slice. Dec 13 02:22:37.809034 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:22:37.814023 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:22:37.820998 kubelet[2221]: I1213 02:22:37.820947 2221 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:22:37.821188 kubelet[2221]: I1213 02:22:37.821177 2221 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:22:37.821919 kubelet[2221]: I1213 02:22:37.821190 2221 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:22:37.823716 kubelet[2221]: I1213 02:22:37.822599 2221 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:22:37.825821 kubelet[2221]: E1213 02:22:37.825796 2221 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-161\" not found" Dec 13 02:22:37.869117 systemd[1]: Created slice kubepods-burstable-pod07edeee396908ecdd75d43fb9c6153b4.slice. Dec 13 02:22:37.882469 systemd[1]: Created slice kubepods-burstable-podf488e8ce742499500acfe3873079c901.slice. Dec 13 02:22:37.891483 systemd[1]: Created slice kubepods-burstable-podf8f867d8de33f662a5756d106fd2950d.slice. Dec 13 02:22:37.923784 kubelet[2221]: I1213 02:22:37.923736 2221 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:37.924179 kubelet[2221]: E1213 02:22:37.924145 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.161:6443/api/v1/nodes\": dial tcp 172.31.16.161:6443: connect: connection refused" node="ip-172-31-16-161" Dec 13 02:22:37.936406 kubelet[2221]: E1213 02:22:37.936354 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-161?timeout=10s\": dial tcp 172.31.16.161:6443: connect: connection refused" interval="400ms" Dec 13 02:22:37.936406 kubelet[2221]: I1213 02:22:37.936372 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:37.936644 kubelet[2221]: I1213 02:22:37.936422 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:37.936644 kubelet[2221]: I1213 02:22:37.936449 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:37.936644 kubelet[2221]: I1213 02:22:37.936471 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 
13 02:22:37.936644 kubelet[2221]: I1213 02:22:37.936492 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:37.936644 kubelet[2221]: I1213 02:22:37.936529 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:37.936829 kubelet[2221]: I1213 02:22:37.936551 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:37.936829 kubelet[2221]: I1213 02:22:37.936572 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:37.936829 kubelet[2221]: I1213 02:22:37.936637 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f488e8ce742499500acfe3873079c901-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-161\" (UID: \"f488e8ce742499500acfe3873079c901\") " pod="kube-system/kube-scheduler-ip-172-31-16-161" Dec 13 02:22:38.128068 kubelet[2221]: I1213 02:22:38.126239 2221 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:38.128401 kubelet[2221]: E1213 02:22:38.128363 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.161:6443/api/v1/nodes\": dial tcp 172.31.16.161:6443: connect: connection refused" node="ip-172-31-16-161" Dec 13 02:22:38.180153 env[1728]: time="2024-12-13T02:22:38.180104299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-161,Uid:07edeee396908ecdd75d43fb9c6153b4,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:38.188774 env[1728]: time="2024-12-13T02:22:38.188732507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-161,Uid:f488e8ce742499500acfe3873079c901,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:38.197052 env[1728]: time="2024-12-13T02:22:38.197007961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-161,Uid:f8f867d8de33f662a5756d106fd2950d,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:38.337771 kubelet[2221]: E1213 02:22:38.337718 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-161?timeout=10s\": dial tcp 172.31.16.161:6443: connect: connection refused" interval="800ms" Dec 13 
02:22:38.530844 kubelet[2221]: I1213 02:22:38.530809 2221 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:38.531195 kubelet[2221]: E1213 02:22:38.531165 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.161:6443/api/v1/nodes\": dial tcp 172.31.16.161:6443: connect: connection refused" node="ip-172-31-16-161" Dec 13 02:22:38.656555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995227774.mount: Deactivated successfully. Dec 13 02:22:38.666201 env[1728]: time="2024-12-13T02:22:38.666152665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.668106 env[1728]: time="2024-12-13T02:22:38.668068154Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.671580 env[1728]: time="2024-12-13T02:22:38.671540152Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.672590 env[1728]: time="2024-12-13T02:22:38.672559431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.674943 env[1728]: time="2024-12-13T02:22:38.674906503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.675952 env[1728]: time="2024-12-13T02:22:38.675865348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.685262 env[1728]: time="2024-12-13T02:22:38.685214599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.687266 env[1728]: time="2024-12-13T02:22:38.687172395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.691226 env[1728]: time="2024-12-13T02:22:38.691185961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.692298 env[1728]: time="2024-12-13T02:22:38.692265955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.696493 env[1728]: time="2024-12-13T02:22:38.696442858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.700985 env[1728]: time="2024-12-13T02:22:38.700946821Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:38.719977 env[1728]: time="2024-12-13T02:22:38.719890373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:38.720231 env[1728]: time="2024-12-13T02:22:38.719941507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:38.720231 env[1728]: time="2024-12-13T02:22:38.719958270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:38.720813 env[1728]: time="2024-12-13T02:22:38.720756994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63d02ab2d2c5c2db3c32343c7cfb2d46d58c630291bd39e5c8e6724c2d8626f8 pid=2259 runtime=io.containerd.runc.v2 Dec 13 02:22:38.756557 systemd[1]: Started cri-containerd-63d02ab2d2c5c2db3c32343c7cfb2d46d58c630291bd39e5c8e6724c2d8626f8.scope. Dec 13 02:22:38.770754 kubelet[2221]: W1213 02:22:38.760613 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:38.770754 kubelet[2221]: E1213 02:22:38.760688 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:38.770884 env[1728]: time="2024-12-13T02:22:38.768970183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:38.770884 env[1728]: time="2024-12-13T02:22:38.769137207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:38.770884 env[1728]: time="2024-12-13T02:22:38.769203380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:38.770884 env[1728]: time="2024-12-13T02:22:38.769436850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be9e8975f613cadff87954178a80d16021665e8774425140546fcc37c5efe668 pid=2292 runtime=io.containerd.runc.v2 Dec 13 02:22:38.788637 env[1728]: time="2024-12-13T02:22:38.784874465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:38.788637 env[1728]: time="2024-12-13T02:22:38.784995572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:38.788637 env[1728]: time="2024-12-13T02:22:38.785030332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:38.788637 env[1728]: time="2024-12-13T02:22:38.785282671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9956c7d18fb5e85bd3a1394957ab458cc55ff53a67a86cac604c4f49857532 pid=2289 runtime=io.containerd.runc.v2 Dec 13 02:22:38.828582 systemd[1]: Started cri-containerd-be9e8975f613cadff87954178a80d16021665e8774425140546fcc37c5efe668.scope. Dec 13 02:22:38.850357 systemd[1]: Started cri-containerd-7b9956c7d18fb5e85bd3a1394957ab458cc55ff53a67a86cac604c4f49857532.scope. Dec 13 02:22:38.914404 env[1728]: time="2024-12-13T02:22:38.914358726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-161,Uid:f488e8ce742499500acfe3873079c901,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d02ab2d2c5c2db3c32343c7cfb2d46d58c630291bd39e5c8e6724c2d8626f8\"" Dec 13 02:22:38.925367 env[1728]: time="2024-12-13T02:22:38.925293764Z" level=info msg="CreateContainer within sandbox \"63d02ab2d2c5c2db3c32343c7cfb2d46d58c630291bd39e5c8e6724c2d8626f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:22:38.974540 env[1728]: time="2024-12-13T02:22:38.974465073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-161,Uid:f8f867d8de33f662a5756d106fd2950d,Namespace:kube-system,Attempt:0,} returns sandbox id \"be9e8975f613cadff87954178a80d16021665e8774425140546fcc37c5efe668\"" Dec 13 02:22:38.974803 env[1728]: time="2024-12-13T02:22:38.974771676Z" level=info msg="CreateContainer within sandbox \"63d02ab2d2c5c2db3c32343c7cfb2d46d58c630291bd39e5c8e6724c2d8626f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a75650c7a8009a1cd783be9f1cddb58c312df26994e16bcc4817fd9c0c8d3ddf\"" Dec 13 02:22:38.976818 env[1728]: time="2024-12-13T02:22:38.976780126Z" level=info msg="StartContainer for \"a75650c7a8009a1cd783be9f1cddb58c312df26994e16bcc4817fd9c0c8d3ddf\"" Dec 13 02:22:38.980848 env[1728]: time="2024-12-13T02:22:38.980804218Z" level=info msg="CreateContainer within sandbox \"be9e8975f613cadff87954178a80d16021665e8774425140546fcc37c5efe668\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:22:38.995489 env[1728]: time="2024-12-13T02:22:38.995428538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-161,Uid:07edeee396908ecdd75d43fb9c6153b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b9956c7d18fb5e85bd3a1394957ab458cc55ff53a67a86cac604c4f49857532\"" Dec 13 02:22:38.999019 kubelet[2221]: E1213 02:22:38.998836 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.161:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-161.18109b4075bf1f9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-161,UID:ip-172-31-16-161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-161,},FirstTimestamp:2024-12-13 02:22:37.693312926 +0000 UTC m=+0.558212490,LastTimestamp:2024-12-13 02:22:37.693312926 +0000 UTC m=+0.558212490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-161,}" Dec 13 02:22:39.000151 
env[1728]: time="2024-12-13T02:22:39.000111150Z" level=info msg="CreateContainer within sandbox \"7b9956c7d18fb5e85bd3a1394957ab458cc55ff53a67a86cac604c4f49857532\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:22:39.011120 env[1728]: time="2024-12-13T02:22:39.011067601Z" level=info msg="CreateContainer within sandbox \"be9e8975f613cadff87954178a80d16021665e8774425140546fcc37c5efe668\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"491551f788712cfd6a28f25bbb0486715c5d4e92ba231b27c256a8e7c0c1db76\"" Dec 13 02:22:39.011955 env[1728]: time="2024-12-13T02:22:39.011925472Z" level=info msg="StartContainer for \"491551f788712cfd6a28f25bbb0486715c5d4e92ba231b27c256a8e7c0c1db76\"" Dec 13 02:22:39.021850 env[1728]: time="2024-12-13T02:22:39.019876665Z" level=info msg="CreateContainer within sandbox \"7b9956c7d18fb5e85bd3a1394957ab458cc55ff53a67a86cac604c4f49857532\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f849a6637ca68f3cd9c6da629be14cd85caade4dcdca3aa1e63cb95ad0a4699\"" Dec 13 02:22:39.021850 env[1728]: time="2024-12-13T02:22:39.020714248Z" level=info msg="StartContainer for \"4f849a6637ca68f3cd9c6da629be14cd85caade4dcdca3aa1e63cb95ad0a4699\"" Dec 13 02:22:39.021560 systemd[1]: Started cri-containerd-a75650c7a8009a1cd783be9f1cddb58c312df26994e16bcc4817fd9c0c8d3ddf.scope. Dec 13 02:22:39.066498 systemd[1]: Started cri-containerd-4f849a6637ca68f3cd9c6da629be14cd85caade4dcdca3aa1e63cb95ad0a4699.scope. Dec 13 02:22:39.079227 systemd[1]: Started cri-containerd-491551f788712cfd6a28f25bbb0486715c5d4e92ba231b27c256a8e7c0c1db76.scope. Dec 13 02:22:39.143738 kubelet[2221]: E1213 02:22:39.141026 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-161?timeout=10s\": dial tcp 172.31.16.161:6443: connect: connection refused" interval="1.6s" Dec 13 02:22:39.143738 kubelet[2221]: W1213 02:22:39.141456 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:39.143738 kubelet[2221]: E1213 02:22:39.143698 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:39.155446 env[1728]: time="2024-12-13T02:22:39.155398510Z" level=info msg="StartContainer for \"a75650c7a8009a1cd783be9f1cddb58c312df26994e16bcc4817fd9c0c8d3ddf\" returns successfully" Dec 13 02:22:39.222237 env[1728]: time="2024-12-13T02:22:39.222189555Z" level=info msg="StartContainer for \"4f849a6637ca68f3cd9c6da629be14cd85caade4dcdca3aa1e63cb95ad0a4699\" returns successfully" Dec 13 02:22:39.223337 env[1728]: time="2024-12-13T02:22:39.223305597Z" level=info msg="StartContainer for \"491551f788712cfd6a28f25bbb0486715c5d4e92ba231b27c256a8e7c0c1db76\" returns successfully" Dec 13 02:22:39.274103 kubelet[2221]: W1213 02:22:39.273974 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.16.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-161&limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:39.274103 kubelet[2221]: E1213 02:22:39.274063 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-161&limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:39.298915 kubelet[2221]: W1213 02:22:39.298793 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.161:6443: connect: connection refused Dec 13 02:22:39.298915 kubelet[2221]: E1213 02:22:39.298877 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:39.334104 kubelet[2221]: I1213 02:22:39.334016 2221 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:39.335149 kubelet[2221]: E1213 02:22:39.334616 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.161:6443/api/v1/nodes\": dial tcp 172.31.16.161:6443: connect: connection refused" node="ip-172-31-16-161" Dec 13 02:22:39.665934 kubelet[2221]: E1213 02:22:39.665827 2221 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.161:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:22:40.939170 kubelet[2221]: I1213 02:22:40.939141 2221 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:42.451967 kubelet[2221]: I1213 02:22:42.451922 2221 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-161" Dec 13 02:22:42.451967 kubelet[2221]: E1213 02:22:42.451971 2221 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-161\": node \"ip-172-31-16-161\" not found" Dec 13 02:22:42.570393 kubelet[2221]: E1213 02:22:42.569421 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 02:22:42.678938 kubelet[2221]: I1213 02:22:42.678894 2221 apiserver.go:52] "Watching apiserver" Dec 13 02:22:42.735204 kubelet[2221]: I1213 02:22:42.735057 2221 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:22:45.122557 systemd[1]: Reloading. 
Dec 13 02:22:45.269299 /usr/lib/systemd/system-generators/torcx-generator[2509]: time="2024-12-13T02:22:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:22:45.285643 /usr/lib/systemd/system-generators/torcx-generator[2509]: time="2024-12-13T02:22:45Z" level=info msg="torcx already run" Dec 13 02:22:45.387350 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:22:45.387382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:22:45.412653 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:22:45.602333 systemd[1]: Stopping kubelet.service... Dec 13 02:22:45.625466 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:22:45.625976 systemd[1]: Stopped kubelet.service. Dec 13 02:22:45.629067 systemd[1]: Starting kubelet.service... Dec 13 02:22:47.159232 systemd[1]: Started kubelet.service. Dec 13 02:22:47.313674 kubelet[2563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:47.313674 kubelet[2563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:22:47.313674 kubelet[2563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:47.317881 kubelet[2563]: I1213 02:22:47.317773 2563 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:22:47.336349 kubelet[2563]: I1213 02:22:47.336312 2563 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:22:47.336641 kubelet[2563]: I1213 02:22:47.336559 2563 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:22:47.337136 kubelet[2563]: I1213 02:22:47.337122 2563 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:22:47.339145 kubelet[2563]: I1213 02:22:47.339120 2563 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
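"Client rotation is on" followed by "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem" means this second kubelet start can skip certificate bootstrapping: kubelet-client-current.pem is a symlink to the newest rotated cert+key bundle. A sketch that inspects such a bundle; passing the same file for both arguments works because crypto/tls skips PEM blocks that do not match what it is looking for, and the bundle carries both the CERTIFICATE and PRIVATE KEY blocks:

```go
// Sketch: load the kubelet's rotating client-credential bundle and print expiry.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	const bundle = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(bundle, bundle) // cert and key in one file
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("client cert expires:", leaf.NotAfter)
}
```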
Dec 13 02:22:47.342324 kubelet[2563]: I1213 02:22:47.342302 2563 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:22:47.347929 sudo[2575]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:22:47.348263 sudo[2575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:22:47.359083 kubelet[2563]: E1213 02:22:47.359016 2563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:22:47.359083 kubelet[2563]: I1213 02:22:47.359079 2563 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:22:47.362861 kubelet[2563]: I1213 02:22:47.362828 2563 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:22:47.363016 kubelet[2563]: I1213 02:22:47.362969 2563 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:22:47.363792 kubelet[2563]: I1213 02:22:47.363550 2563 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:22:47.364269 kubelet[2563]: I1213 02:22:47.363588 2563 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:22:47.364269 kubelet[2563]: I1213 02:22:47.364211 2563 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:22:47.364269 kubelet[2563]: I1213 02:22:47.364225 2563 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:22:47.364269 kubelet[2563]: I1213 02:22:47.364265 2563 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:47.365252 kubelet[2563]: I1213 02:22:47.364782 2563 kubelet.go:408] 
"Attempting to sync node with API server" Dec 13 02:22:47.365764 kubelet[2563]: I1213 02:22:47.365742 2563 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:22:47.365841 kubelet[2563]: I1213 02:22:47.365790 2563 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:22:47.365841 kubelet[2563]: I1213 02:22:47.365810 2563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:22:47.378321 kubelet[2563]: I1213 02:22:47.368983 2563 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:22:47.378321 kubelet[2563]: I1213 02:22:47.369562 2563 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:22:47.378321 kubelet[2563]: I1213 02:22:47.372246 2563 server.go:1269] "Started kubelet" Dec 13 02:22:47.378321 kubelet[2563]: I1213 02:22:47.375021 2563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:22:47.391856 kubelet[2563]: I1213 02:22:47.390526 2563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:22:47.392458 kubelet[2563]: I1213 02:22:47.392064 2563 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:22:47.393566 kubelet[2563]: I1213 02:22:47.393484 2563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:22:47.393840 kubelet[2563]: I1213 02:22:47.393819 2563 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:22:47.394248 kubelet[2563]: I1213 02:22:47.394224 2563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:22:47.397043 kubelet[2563]: I1213 02:22:47.397022 2563 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:22:47.397337 kubelet[2563]: E1213 02:22:47.397313 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-161\" not found" Dec 13 02:22:47.400715 kubelet[2563]: I1213 02:22:47.400691 2563 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:22:47.400873 kubelet[2563]: I1213 02:22:47.400860 2563 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:22:47.440276 kubelet[2563]: I1213 02:22:47.440245 2563 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:22:47.440447 kubelet[2563]: I1213 02:22:47.440362 2563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:22:47.473500 kubelet[2563]: I1213 02:22:47.473463 2563 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:22:47.484791 kubelet[2563]: I1213 02:22:47.484741 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:22:47.518313 kubelet[2563]: E1213 02:22:47.518277 2563 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:22:47.530597 kubelet[2563]: I1213 02:22:47.530560 2563 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:22:47.530847 kubelet[2563]: I1213 02:22:47.530832 2563 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:22:47.530976 kubelet[2563]: I1213 02:22:47.530967 2563 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:22:47.531099 kubelet[2563]: E1213 02:22:47.531080 2563 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:22:47.631240 kubelet[2563]: E1213 02:22:47.631193 2563 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:22:47.640729 kubelet[2563]: I1213 02:22:47.640703 2563 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:22:47.640901 kubelet[2563]: I1213 02:22:47.640888 2563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:22:47.640986 kubelet[2563]: I1213 02:22:47.640979 2563 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:47.641254 kubelet[2563]: I1213 02:22:47.641238 2563 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:22:47.641373 kubelet[2563]: I1213 02:22:47.641347 2563 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:22:47.641440 kubelet[2563]: I1213 02:22:47.641432 2563 policy_none.go:49] "None policy: Start" Dec 13 02:22:47.642333 kubelet[2563]: I1213 02:22:47.642317 2563 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:22:47.642471 kubelet[2563]: I1213 02:22:47.642461 2563 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:22:47.642790 kubelet[2563]: I1213 02:22:47.642775 2563 state_mem.go:75] "Updated machine memory state" Dec 13 02:22:47.660032 kubelet[2563]: I1213 02:22:47.660006 2563 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:22:47.662019 kubelet[2563]: I1213 02:22:47.661992 2563 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:22:47.662230 kubelet[2563]: I1213 02:22:47.662190 2563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:22:47.662665 kubelet[2563]: I1213 02:22:47.662649 2563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:22:47.787349 kubelet[2563]: I1213 02:22:47.787255 2563 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-161" Dec 13 02:22:47.801151 kubelet[2563]: I1213 02:22:47.801119 2563 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-161" Dec 13 02:22:47.801324 kubelet[2563]: I1213 02:22:47.801210 2563 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-161" Dec 13 02:22:47.845123 kubelet[2563]: E1213 02:22:47.845094 2563 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-161\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.845383 kubelet[2563]: E1213 02:22:47.845078 2563 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-161\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:47.918961 kubelet[2563]: I1213 02:22:47.918930 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:47.919190 kubelet[2563]: I1213 02:22:47.919174 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.919315 kubelet[2563]: I1213 02:22:47.919301 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:47.919425 kubelet[2563]: I1213 02:22:47.919399 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8f867d8de33f662a5756d106fd2950d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-161\" (UID: \"f8f867d8de33f662a5756d106fd2950d\") " pod="kube-system/kube-apiserver-ip-172-31-16-161" Dec 13 02:22:47.919490 kubelet[2563]: I1213 02:22:47.919428 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.919490 kubelet[2563]: I1213 02:22:47.919472 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.919619 kubelet[2563]: I1213 02:22:47.919497 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.919619 kubelet[2563]: I1213 02:22:47.919540 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07edeee396908ecdd75d43fb9c6153b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-161\" (UID: \"07edeee396908ecdd75d43fb9c6153b4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-161" Dec 13 02:22:47.919619 kubelet[2563]: I1213 02:22:47.919567 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f488e8ce742499500acfe3873079c901-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-161\" (UID: \"f488e8ce742499500acfe3873079c901\") " pod="kube-system/kube-scheduler-ip-172-31-16-161" Dec 13 02:22:48.348020 sudo[2575]: 
pam_unix(sudo:session): session closed for user root Dec 13 02:22:48.379986 kubelet[2563]: I1213 02:22:48.379951 2563 apiserver.go:52] "Watching apiserver" Dec 13 02:22:48.401651 kubelet[2563]: I1213 02:22:48.401594 2563 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:22:48.528347 kubelet[2563]: I1213 02:22:48.528276 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-161" podStartSLOduration=1.52825152 podStartE2EDuration="1.52825152s" podCreationTimestamp="2024-12-13 02:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:48.516743476 +0000 UTC m=+1.321117266" watchObservedRunningTime="2024-12-13 02:22:48.52825152 +0000 UTC m=+1.332625316" Dec 13 02:22:48.548750 kubelet[2563]: I1213 02:22:48.548701 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-161" podStartSLOduration=5.548679687 podStartE2EDuration="5.548679687s" podCreationTimestamp="2024-12-13 02:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:48.529620666 +0000 UTC m=+1.333994460" watchObservedRunningTime="2024-12-13 02:22:48.548679687 +0000 UTC m=+1.353053478" Dec 13 02:22:48.610638 kubelet[2563]: I1213 02:22:48.610500 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-161" podStartSLOduration=4.610478489 podStartE2EDuration="4.610478489s" podCreationTimestamp="2024-12-13 02:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:48.549250651 +0000 UTC m=+1.353624445" watchObservedRunningTime="2024-12-13 02:22:48.610478489 +0000 UTC m=+1.414852283" Dec 13 02:22:49.139706 update_engine[1723]: I1213 02:22:49.139653 1723 update_attempter.cc:509] Updating boot flags... Dec 13 02:22:51.029237 sudo[1966]: pam_unix(sudo:session): session closed for user root Dec 13 02:22:51.052626 sshd[1963]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:51.056709 systemd[1]: sshd@4-172.31.16.161:22-139.178.68.195:40486.service: Deactivated successfully. Dec 13 02:22:51.058079 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:22:51.058418 systemd[1]: session-5.scope: Consumed 5.255s CPU time. Dec 13 02:22:51.059672 systemd-logind[1722]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:22:51.061042 systemd-logind[1722]: Removed session 5. Dec 13 02:22:52.180897 kubelet[2563]: I1213 02:22:52.180870 2563 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:22:52.182453 env[1728]: time="2024-12-13T02:22:52.182404261Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:22:52.185304 kubelet[2563]: I1213 02:22:52.185102 2563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:22:52.831829 systemd[1]: Created slice kubepods-burstable-podddbe28ac_0859_4e64_93fa_d3d7fbbad4ff.slice. Dec 13 02:22:52.859439 systemd[1]: Created slice kubepods-besteffort-podcf8b9269_3c5d_4edd_891c_595996dd463b.slice. 
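
The "Updating runtime config through cri with podcidr" record above is the kubelet pushing the node's freshly assigned pod CIDR (192.168.0.0/24) down to the container runtime over the CRI gRPC API; containerd then waits for a CNI provider (Cilium, set up below) to drop a network config, per the "No cni config template is specified" line. A minimal sketch of that call, assuming containerd's default CRI socket path and the v1 CRI API, not the kubelet's actual code:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path is an assumption (containerd's default on Flatcar).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // The call behind "Updating runtime config through cri with podcidr".
        _, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pod CIDR pushed to the runtime")
    }
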
Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863341 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-run\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863387 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hostproc\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863410 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf8b9269-3c5d-4edd-891c-595996dd463b-lib-modules\") pod \"kube-proxy-dgw4f\" (UID: \"cf8b9269-3c5d-4edd-891c-595996dd463b\") " pod="kube-system/kube-proxy-dgw4f" Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863433 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mb7\" (UniqueName: \"kubernetes.io/projected/cf8b9269-3c5d-4edd-891c-595996dd463b-kube-api-access-59mb7\") pod \"kube-proxy-dgw4f\" (UID: \"cf8b9269-3c5d-4edd-891c-595996dd463b\") " pod="kube-system/kube-proxy-dgw4f" Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863459 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf8b9269-3c5d-4edd-891c-595996dd463b-kube-proxy\") pod \"kube-proxy-dgw4f\" (UID: \"cf8b9269-3c5d-4edd-891c-595996dd463b\") " pod="kube-system/kube-proxy-dgw4f" Dec 13 02:22:52.863857 kubelet[2563]: I1213 02:22:52.863484 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-etc-cni-netd\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863520 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-bpf-maps\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863543 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cni-path\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863567 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf8b9269-3c5d-4edd-891c-595996dd463b-xtables-lock\") pod \"kube-proxy-dgw4f\" (UID: \"cf8b9269-3c5d-4edd-891c-595996dd463b\") " pod="kube-system/kube-proxy-dgw4f" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863594 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-cgroup\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863617 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-clustermesh-secrets\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.864271 kubelet[2563]: I1213 02:22:52.863645 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsmj7\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-kube-api-access-qsmj7\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863669 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-net\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863692 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-kernel\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863714 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-xtables-lock\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863738 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-config-path\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863761 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hubble-tls\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.865130 kubelet[2563]: I1213 02:22:52.863789 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-lib-modules\") pod \"cilium-x5lwj\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " pod="kube-system/cilium-x5lwj" Dec 13 02:22:52.965782 kubelet[2563]: I1213 02:22:52.965708 2563 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 02:22:53.154537 env[1728]: time="2024-12-13T02:22:53.154416794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5lwj,Uid:ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:53.170682 env[1728]: time="2024-12-13T02:22:53.170217080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgw4f,Uid:cf8b9269-3c5d-4edd-891c-595996dd463b,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:53.202997 env[1728]: time="2024-12-13T02:22:53.196254534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:53.202997 env[1728]: time="2024-12-13T02:22:53.196311399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:53.202997 env[1728]: time="2024-12-13T02:22:53.196329693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:53.202997 env[1728]: time="2024-12-13T02:22:53.196504967Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a9bac884bf8ffa593dff7ecad4cb33d0c2723086b50f3a4c12814239bb2c65a pid=2747 runtime=io.containerd.runc.v2 Dec 13 02:22:53.204585 env[1728]: time="2024-12-13T02:22:53.193927693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:53.204585 env[1728]: time="2024-12-13T02:22:53.193976260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:53.204585 env[1728]: time="2024-12-13T02:22:53.194017265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:53.204585 env[1728]: time="2024-12-13T02:22:53.194209148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034 pid=2744 runtime=io.containerd.runc.v2 Dec 13 02:22:53.260965 systemd[1]: Started cri-containerd-17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034.scope. Dec 13 02:22:53.275189 systemd[1]: Started cri-containerd-1a9bac884bf8ffa593dff7ecad4cb33d0c2723086b50f3a4c12814239bb2c65a.scope. Dec 13 02:22:53.287886 systemd[1]: Created slice kubepods-besteffort-podb8863b29_673f_45b4_bb60_4110267ae34d.slice. 
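
Each "starting signal loop" record above is a runc v2 shim coming up for one pod sandbox; the path component after io.containerd.runtime.v2.task/k8s.io/ is the sandbox ID that the later RunPodSandbox lines return. A sketch enumerating those containers with the containerd Go client, socket path again assumed:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace,
        // matching the shim paths in the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            task, err := c.Task(ctx, nil)
            if err != nil {
                continue // container record exists but its shim has exited
            }
            fmt.Printf("%s pid=%d\n", c.ID(), task.Pid())
        }
    }
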
Dec 13 02:22:53.325606 env[1728]: time="2024-12-13T02:22:53.325552200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5lwj,Uid:ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\"" Dec 13 02:22:53.328259 env[1728]: time="2024-12-13T02:22:53.328216128Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:22:53.370548 kubelet[2563]: I1213 02:22:53.370273 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7gt\" (UniqueName: \"kubernetes.io/projected/b8863b29-673f-45b4-bb60-4110267ae34d-kube-api-access-zd7gt\") pod \"cilium-operator-5d85765b45-8vhrf\" (UID: \"b8863b29-673f-45b4-bb60-4110267ae34d\") " pod="kube-system/cilium-operator-5d85765b45-8vhrf" Dec 13 02:22:53.370548 kubelet[2563]: I1213 02:22:53.370340 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8863b29-673f-45b4-bb60-4110267ae34d-cilium-config-path\") pod \"cilium-operator-5d85765b45-8vhrf\" (UID: \"b8863b29-673f-45b4-bb60-4110267ae34d\") " pod="kube-system/cilium-operator-5d85765b45-8vhrf" Dec 13 02:22:53.473549 env[1728]: time="2024-12-13T02:22:53.472622059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgw4f,Uid:cf8b9269-3c5d-4edd-891c-595996dd463b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9bac884bf8ffa593dff7ecad4cb33d0c2723086b50f3a4c12814239bb2c65a\"" Dec 13 02:22:53.483065 env[1728]: time="2024-12-13T02:22:53.483021743Z" level=info msg="CreateContainer within sandbox \"1a9bac884bf8ffa593dff7ecad4cb33d0c2723086b50f3a4c12814239bb2c65a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:22:53.513139 env[1728]: time="2024-12-13T02:22:53.513096100Z" level=info msg="CreateContainer within sandbox \"1a9bac884bf8ffa593dff7ecad4cb33d0c2723086b50f3a4c12814239bb2c65a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"647bf740cf7f9bf7f4f903f5bf7524d274e3a934c864c630ec206f5b13caa788\"" Dec 13 02:22:53.515483 env[1728]: time="2024-12-13T02:22:53.513960767Z" level=info msg="StartContainer for \"647bf740cf7f9bf7f4f903f5bf7524d274e3a934c864c630ec206f5b13caa788\"" Dec 13 02:22:53.553000 systemd[1]: Started cri-containerd-647bf740cf7f9bf7f4f903f5bf7524d274e3a934c864c630ec206f5b13caa788.scope. Dec 13 02:22:53.595419 env[1728]: time="2024-12-13T02:22:53.595179988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8vhrf,Uid:b8863b29-673f-45b4-bb60-4110267ae34d,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:53.605331 env[1728]: time="2024-12-13T02:22:53.605281368Z" level=info msg="StartContainer for \"647bf740cf7f9bf7f4f903f5bf7524d274e3a934c864c630ec206f5b13caa788\" returns successfully" Dec 13 02:22:53.641239 env[1728]: time="2024-12-13T02:22:53.641138531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:53.641239 env[1728]: time="2024-12-13T02:22:53.641188794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:53.641733 env[1728]: time="2024-12-13T02:22:53.641214396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:53.644464 env[1728]: time="2024-12-13T02:22:53.643113585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa pid=2861 runtime=io.containerd.runc.v2 Dec 13 02:22:53.666667 systemd[1]: Started cri-containerd-c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa.scope. Dec 13 02:22:53.752124 env[1728]: time="2024-12-13T02:22:53.750496666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8vhrf,Uid:b8863b29-673f-45b4-bb60-4110267ae34d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\"" Dec 13 02:22:57.550398 kubelet[2563]: I1213 02:22:57.550321 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgw4f" podStartSLOduration=5.550299082 podStartE2EDuration="5.550299082s" podCreationTimestamp="2024-12-13 02:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:53.635202898 +0000 UTC m=+6.439576694" watchObservedRunningTime="2024-12-13 02:22:57.550299082 +0000 UTC m=+10.354672880" Dec 13 02:22:59.625759 amazon-ssm-agent[1710]: 2024-12-13 02:22:59 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:23:03.546103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224864400.mount: Deactivated successfully. Dec 13 02:23:07.404237 env[1728]: time="2024-12-13T02:23:07.404184114Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:07.407343 env[1728]: time="2024-12-13T02:23:07.407296517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:07.409624 env[1728]: time="2024-12-13T02:23:07.409581374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:07.410302 env[1728]: time="2024-12-13T02:23:07.410265197Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:23:07.412252 env[1728]: time="2024-12-13T02:23:07.412211886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:23:07.416391 env[1728]: time="2024-12-13T02:23:07.416249421Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:23:07.437012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516079776.mount: Deactivated successfully. 
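
The ImageCreate/ImageUpdate events and the "PullImage ... returns image reference" line above show containerd resolving the digest-pinned cilium ref to a local image before the first init container (mount-cgroup) can be created. The equivalent pull through the containerd client, a sketch with the ref copied from the log and socket/namespace assumed as before:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // The "returns image reference" log line reports this resolved ID.
        fmt.Println(img.Name(), img.Target().Digest)
    }
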
Dec 13 02:23:07.447370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019082589.mount: Deactivated successfully. Dec 13 02:23:07.455339 env[1728]: time="2024-12-13T02:23:07.455283582Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\"" Dec 13 02:23:07.456326 env[1728]: time="2024-12-13T02:23:07.456294985Z" level=info msg="StartContainer for \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\"" Dec 13 02:23:07.501449 systemd[1]: Started cri-containerd-5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde.scope. Dec 13 02:23:07.555550 env[1728]: time="2024-12-13T02:23:07.554687472Z" level=info msg="StartContainer for \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\" returns successfully" Dec 13 02:23:07.566583 systemd[1]: cri-containerd-5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde.scope: Deactivated successfully. Dec 13 02:23:07.724777 env[1728]: time="2024-12-13T02:23:07.724623807Z" level=info msg="shim disconnected" id=5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde Dec 13 02:23:07.725171 env[1728]: time="2024-12-13T02:23:07.725113861Z" level=warning msg="cleaning up after shim disconnected" id=5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde namespace=k8s.io Dec 13 02:23:07.726037 env[1728]: time="2024-12-13T02:23:07.725982936Z" level=info msg="cleaning up dead shim" Dec 13 02:23:07.752590 env[1728]: time="2024-12-13T02:23:07.752498931Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3078 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:23:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 02:23:08.431979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde-rootfs.mount: Deactivated successfully. Dec 13 02:23:08.683755 env[1728]: time="2024-12-13T02:23:08.682099349Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:23:08.714494 env[1728]: time="2024-12-13T02:23:08.714439809Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\"" Dec 13 02:23:08.716237 env[1728]: time="2024-12-13T02:23:08.716137953Z" level=info msg="StartContainer for \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\"" Dec 13 02:23:08.763308 systemd[1]: Started cri-containerd-b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f.scope. Dec 13 02:23:08.801604 env[1728]: time="2024-12-13T02:23:08.800289746Z" level=info msg="StartContainer for \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\" returns successfully" Dec 13 02:23:08.854210 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:23:08.854914 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:23:08.855317 systemd[1]: Stopping systemd-sysctl.service... 
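
The pattern above (StartContainer returns, the container scope is deactivated, then "shim disconnected" and "cleaning up dead shim") is a run-to-completion init container, mount-cgroup, exiting normally; the "failed to remove runc container ... exit status 255" cleanup warning is shim-teardown noise rather than a failure of the init step, which the log reports as successful. Observing such an exit from the client side, a sketch using the container ID from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        c, err := client.LoadContainer(ctx, "5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde")
        if err != nil {
            log.Fatal(err) // record may already be gone once kubelet cleans up
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err) // no task left: the shim was already reaped
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        status := <-exitCh // blocks until the init step finishes
        code, exitedAt, _ := status.Result()
        fmt.Printf("exit code=%d at=%s\n", code, exitedAt)
    }
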
Dec 13 02:23:08.859398 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:23:08.860267 systemd[1]: cri-containerd-b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f.scope: Deactivated successfully. Dec 13 02:23:08.912932 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:23:08.934684 env[1728]: time="2024-12-13T02:23:08.934478452Z" level=info msg="shim disconnected" id=b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f Dec 13 02:23:08.934684 env[1728]: time="2024-12-13T02:23:08.934574387Z" level=warning msg="cleaning up after shim disconnected" id=b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f namespace=k8s.io Dec 13 02:23:08.934684 env[1728]: time="2024-12-13T02:23:08.934591222Z" level=info msg="cleaning up dead shim" Dec 13 02:23:08.944359 env[1728]: time="2024-12-13T02:23:08.944310596Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3147 runtime=io.containerd.runc.v2\n" Dec 13 02:23:09.431303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f-rootfs.mount: Deactivated successfully. Dec 13 02:23:09.705733 env[1728]: time="2024-12-13T02:23:09.703450062Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:23:09.744404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089326326.mount: Deactivated successfully. Dec 13 02:23:09.764110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269942621.mount: Deactivated successfully. Dec 13 02:23:09.780680 env[1728]: time="2024-12-13T02:23:09.780628066Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\"" Dec 13 02:23:09.783304 env[1728]: time="2024-12-13T02:23:09.783249477Z" level=info msg="StartContainer for \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\"" Dec 13 02:23:09.816382 systemd[1]: Started cri-containerd-4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae.scope. Dec 13 02:23:09.884121 env[1728]: time="2024-12-13T02:23:09.884085260Z" level=info msg="StartContainer for \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\" returns successfully" Dec 13 02:23:09.900077 systemd[1]: cri-containerd-4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae.scope: Deactivated successfully. 
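
The next init step, apply-sysctl-overwrites, adjusts kernel parameters, and the log shows Flatcar stopping and re-running systemd-sysctl.service around it. Exactly which keys Cilium rewrites is version-specific, so the following read-only sketch inspects the classic candidates, the rp_filter settings, purely as an illustration rather than a statement of what v1.12.5 touches:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        for _, key := range []string{
            "net/ipv4/conf/all/rp_filter",
            "net/ipv4/conf/default/rp_filter",
        } {
            b, err := os.ReadFile("/proc/sys/" + key)
            if err != nil {
                fmt.Println(key, "unreadable:", err)
                continue
            }
            fmt.Println(key, "=", strings.TrimSpace(string(b)))
        }
    }
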
Dec 13 02:23:10.032996 env[1728]: time="2024-12-13T02:23:10.031971765Z" level=info msg="shim disconnected" id=4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae Dec 13 02:23:10.032996 env[1728]: time="2024-12-13T02:23:10.032133255Z" level=warning msg="cleaning up after shim disconnected" id=4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae namespace=k8s.io Dec 13 02:23:10.032996 env[1728]: time="2024-12-13T02:23:10.032180719Z" level=info msg="cleaning up dead shim" Dec 13 02:23:10.076947 env[1728]: time="2024-12-13T02:23:10.076894170Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\n" Dec 13 02:23:10.326336 env[1728]: time="2024-12-13T02:23:10.325983124Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:10.328703 env[1728]: time="2024-12-13T02:23:10.328657512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:10.332229 env[1728]: time="2024-12-13T02:23:10.332112997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:10.336283 env[1728]: time="2024-12-13T02:23:10.336225810Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:23:10.340824 env[1728]: time="2024-12-13T02:23:10.340781262Z" level=info msg="CreateContainer within sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:23:10.368133 env[1728]: time="2024-12-13T02:23:10.367672256Z" level=info msg="CreateContainer within sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\"" Dec 13 02:23:10.369106 env[1728]: time="2024-12-13T02:23:10.369050192Z" level=info msg="StartContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\"" Dec 13 02:23:10.394019 systemd[1]: Started cri-containerd-58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516.scope. 
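
The mount-bpf-fs init step that exits above is the one responsible for a BPF filesystem on /sys/fs/bpf, which the cilium-agent relies on for pinned maps. A stdlib-only sketch that scans the mount table for it:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/self/mounts")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // fields: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "bpf" {
                fmt.Println("bpf filesystem mounted at", fields[1])
            }
        }
    }
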
Dec 13 02:23:10.448608 env[1728]: time="2024-12-13T02:23:10.448560743Z" level=info msg="StartContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" returns successfully" Dec 13 02:23:10.690082 env[1728]: time="2024-12-13T02:23:10.690029266Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:23:10.714414 env[1728]: time="2024-12-13T02:23:10.714313741Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\"" Dec 13 02:23:10.715185 env[1728]: time="2024-12-13T02:23:10.715142575Z" level=info msg="StartContainer for \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\"" Dec 13 02:23:10.771131 systemd[1]: Started cri-containerd-d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29.scope. Dec 13 02:23:10.840572 kubelet[2563]: I1213 02:23:10.840501 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8vhrf" podStartSLOduration=1.255850828 podStartE2EDuration="17.840482408s" podCreationTimestamp="2024-12-13 02:22:53 +0000 UTC" firstStartedPulling="2024-12-13 02:22:53.753047175 +0000 UTC m=+6.557420959" lastFinishedPulling="2024-12-13 02:23:10.337678763 +0000 UTC m=+23.142052539" observedRunningTime="2024-12-13 02:23:10.840440545 +0000 UTC m=+23.644814336" watchObservedRunningTime="2024-12-13 02:23:10.840482408 +0000 UTC m=+23.644856204" Dec 13 02:23:10.883163 env[1728]: time="2024-12-13T02:23:10.883027385Z" level=info msg="StartContainer for \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\" returns successfully" Dec 13 02:23:10.886192 systemd[1]: cri-containerd-d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29.scope: Deactivated successfully. Dec 13 02:23:10.949129 env[1728]: time="2024-12-13T02:23:10.948422545Z" level=info msg="shim disconnected" id=d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29 Dec 13 02:23:10.949574 env[1728]: time="2024-12-13T02:23:10.949547515Z" level=warning msg="cleaning up after shim disconnected" id=d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29 namespace=k8s.io Dec 13 02:23:10.949673 env[1728]: time="2024-12-13T02:23:10.949661395Z" level=info msg="cleaning up dead shim" Dec 13 02:23:10.973280 env[1728]: time="2024-12-13T02:23:10.973223775Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3297 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:23:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 02:23:11.432618 systemd[1]: run-containerd-runc-k8s.io-d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29-runc.gvm5ew.mount: Deactivated successfully. Dec 13 02:23:11.433761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29-rootfs.mount: Deactivated successfully. 
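
The cilium-operator startup record above is a worked example of the kubelet's two latency fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why it reads 1.26s against an end-to-end 17.84s. Reproducing the arithmetic with the timestamps copied from the log (the final digit drifts by rounding):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2024-12-13 02:22:53 +0000 UTC")
        running := parse("2024-12-13 02:23:10.840482408 +0000 UTC")
        pullStart := parse("2024-12-13 02:22:53.753047175 +0000 UTC")
        pullEnd := parse("2024-12-13 02:23:10.337678763 +0000 UTC")

        e2e := running.Sub(created)         // 17.840482408s in the log
        slo := e2e - pullEnd.Sub(pullStart) // ~1.255850828s in the log
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }

Excluding pull time keeps the SLO metric comparable between nodes with cold and warm image caches.
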
Dec 13 02:23:11.745744 env[1728]: time="2024-12-13T02:23:11.745177090Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:23:11.774659 env[1728]: time="2024-12-13T02:23:11.774603210Z" level=info msg="CreateContainer within sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\"" Dec 13 02:23:11.775576 env[1728]: time="2024-12-13T02:23:11.775545353Z" level=info msg="StartContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\"" Dec 13 02:23:11.827089 systemd[1]: Started cri-containerd-6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83.scope. Dec 13 02:23:11.948620 env[1728]: time="2024-12-13T02:23:11.948560926Z" level=info msg="StartContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" returns successfully" Dec 13 02:23:12.238467 kubelet[2563]: I1213 02:23:12.234323 2563 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:23:12.315521 systemd[1]: Created slice kubepods-burstable-podd5158bea_ca55_43de_9fbe_d99281ed0280.slice. Dec 13 02:23:12.328728 systemd[1]: Created slice kubepods-burstable-podc453a818_57e2_4827_88f5_cfb93c3fec40.slice. Dec 13 02:23:12.389293 kubelet[2563]: I1213 02:23:12.389245 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9l2\" (UniqueName: \"kubernetes.io/projected/d5158bea-ca55-43de-9fbe-d99281ed0280-kube-api-access-bq9l2\") pod \"coredns-6f6b679f8f-9s425\" (UID: \"d5158bea-ca55-43de-9fbe-d99281ed0280\") " pod="kube-system/coredns-6f6b679f8f-9s425" Dec 13 02:23:12.389598 kubelet[2563]: I1213 02:23:12.389321 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c453a818-57e2-4827-88f5-cfb93c3fec40-config-volume\") pod \"coredns-6f6b679f8f-85pr4\" (UID: \"c453a818-57e2-4827-88f5-cfb93c3fec40\") " pod="kube-system/coredns-6f6b679f8f-85pr4" Dec 13 02:23:12.389598 kubelet[2563]: I1213 02:23:12.389470 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5158bea-ca55-43de-9fbe-d99281ed0280-config-volume\") pod \"coredns-6f6b679f8f-9s425\" (UID: \"d5158bea-ca55-43de-9fbe-d99281ed0280\") " pod="kube-system/coredns-6f6b679f8f-9s425" Dec 13 02:23:12.389598 kubelet[2563]: I1213 02:23:12.389520 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2w2w\" (UniqueName: \"kubernetes.io/projected/c453a818-57e2-4827-88f5-cfb93c3fec40-kube-api-access-p2w2w\") pod \"coredns-6f6b679f8f-85pr4\" (UID: \"c453a818-57e2-4827-88f5-cfb93c3fec40\") " pod="kube-system/coredns-6f6b679f8f-85pr4" Dec 13 02:23:12.638769 env[1728]: time="2024-12-13T02:23:12.637596614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9s425,Uid:d5158bea-ca55-43de-9fbe-d99281ed0280,Namespace:kube-system,Attempt:0,}" Dec 13 02:23:12.642493 env[1728]: time="2024-12-13T02:23:12.642449763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-85pr4,Uid:c453a818-57e2-4827-88f5-cfb93c3fec40,Namespace:kube-system,Attempt:0,}" Dec 13 
02:23:14.971972 (udev-worker)[3422]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:14.978620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:23:14.978864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:23:14.975129 systemd-networkd[1462]: cilium_host: Link UP Dec 13 02:23:14.976324 systemd-networkd[1462]: cilium_net: Link UP Dec 13 02:23:14.976548 systemd-networkd[1462]: cilium_net: Gained carrier Dec 13 02:23:14.977215 systemd-networkd[1462]: cilium_host: Gained carrier Dec 13 02:23:14.977391 (udev-worker)[3464]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:15.180441 systemd-networkd[1462]: cilium_net: Gained IPv6LL Dec 13 02:23:15.207593 systemd-networkd[1462]: cilium_vxlan: Link UP Dec 13 02:23:15.207603 systemd-networkd[1462]: cilium_vxlan: Gained carrier Dec 13 02:23:15.395989 systemd-networkd[1462]: cilium_host: Gained IPv6LL Dec 13 02:23:16.063541 kernel: NET: Registered PF_ALG protocol family Dec 13 02:23:16.851772 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL Dec 13 02:23:17.327544 (udev-worker)[3475]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:17.344427 systemd-networkd[1462]: lxc_health: Link UP Dec 13 02:23:17.365379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:23:17.365790 systemd-networkd[1462]: lxc_health: Gained carrier Dec 13 02:23:17.769538 systemd-networkd[1462]: lxc86333a7f944e: Link UP Dec 13 02:23:17.777540 kernel: eth0: renamed from tmp420f6 Dec 13 02:23:17.788305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc86333a7f944e: link becomes ready Dec 13 02:23:17.787608 systemd-networkd[1462]: lxc86333a7f944e: Gained carrier Dec 13 02:23:17.812392 systemd-networkd[1462]: lxcb262046fff75: Link UP Dec 13 02:23:17.823959 kernel: eth0: renamed from tmp27cef Dec 13 02:23:17.832558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb262046fff75: link becomes ready Dec 13 02:23:17.832798 systemd-networkd[1462]: lxcb262046fff75: Gained carrier Dec 13 02:23:17.833672 (udev-worker)[3791]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:18.579850 systemd-networkd[1462]: lxc_health: Gained IPv6LL Dec 13 02:23:18.835699 systemd-networkd[1462]: lxc86333a7f944e: Gained IPv6LL Dec 13 02:23:19.216953 kubelet[2563]: I1213 02:23:19.212472 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x5lwj" podStartSLOduration=13.122314027 podStartE2EDuration="27.207074219s" podCreationTimestamp="2024-12-13 02:22:52 +0000 UTC" firstStartedPulling="2024-12-13 02:22:53.327199788 +0000 UTC m=+6.131573574" lastFinishedPulling="2024-12-13 02:23:07.411959974 +0000 UTC m=+20.216333766" observedRunningTime="2024-12-13 02:23:12.752816422 +0000 UTC m=+25.557190227" watchObservedRunningTime="2024-12-13 02:23:19.207074219 +0000 UTC m=+32.011448014" Dec 13 02:23:19.697913 systemd-networkd[1462]: lxcb262046fff75: Gained IPv6LL Dec 13 02:23:24.210981 env[1728]: time="2024-12-13T02:23:24.210719113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:24.211494 env[1728]: time="2024-12-13T02:23:24.210996819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:24.211494 env[1728]: time="2024-12-13T02:23:24.211044953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:24.214594 env[1728]: time="2024-12-13T02:23:24.211372105Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9 pid=3840 runtime=io.containerd.runc.v2 Dec 13 02:23:24.223626 env[1728]: time="2024-12-13T02:23:24.215675254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:24.223626 env[1728]: time="2024-12-13T02:23:24.215736372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:24.223626 env[1728]: time="2024-12-13T02:23:24.215753438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:24.223626 env[1728]: time="2024-12-13T02:23:24.221089763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27cef3adbf7aa8f2a0045c3715e5932263f882a07287d546f42cf9be47832879 pid=3854 runtime=io.containerd.runc.v2 Dec 13 02:23:24.261745 systemd[1]: Started cri-containerd-420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9.scope. Dec 13 02:23:24.282455 systemd[1]: run-containerd-runc-k8s.io-420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9-runc.e0FJ1D.mount: Deactivated successfully. Dec 13 02:23:24.337329 systemd[1]: Started cri-containerd-27cef3adbf7aa8f2a0045c3715e5932263f882a07287d546f42cf9be47832879.scope. 
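
The burst of systemd-networkd and ADDRCONF records above is Cilium building its datapath: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, lxc_health, and one lxc* device per pod (the "eth0: renamed from tmp420f6"/"tmp27cef" lines line up with the two coredns sandbox IDs in the surrounding entries). A sketch listing those links with the vishvananda/netlink package; the package choice is an assumption, since the log only shows the kernel and networkd side:

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            log.Fatal(err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
                fmt.Printf("%-16s type=%-8s state=%s\n", attrs.Name, l.Type(), attrs.OperState)
            }
        }
    }
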
Dec 13 02:23:24.424977 env[1728]: time="2024-12-13T02:23:24.424845140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-85pr4,Uid:c453a818-57e2-4827-88f5-cfb93c3fec40,Namespace:kube-system,Attempt:0,} returns sandbox id \"420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9\"" Dec 13 02:23:24.434279 env[1728]: time="2024-12-13T02:23:24.434212936Z" level=info msg="CreateContainer within sandbox \"420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:23:24.473398 env[1728]: time="2024-12-13T02:23:24.472560511Z" level=info msg="CreateContainer within sandbox \"420f6d5d69912aa3621b71d2f482b43251fd834d74d9687d46964ed69b64c0c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc7d4082aa97046064070e65bb1964ace6638911b336bd1df80782f5bd1e214e\"" Dec 13 02:23:24.475036 env[1728]: time="2024-12-13T02:23:24.475001966Z" level=info msg="StartContainer for \"cc7d4082aa97046064070e65bb1964ace6638911b336bd1df80782f5bd1e214e\"" Dec 13 02:23:24.485319 env[1728]: time="2024-12-13T02:23:24.485264221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9s425,Uid:d5158bea-ca55-43de-9fbe-d99281ed0280,Namespace:kube-system,Attempt:0,} returns sandbox id \"27cef3adbf7aa8f2a0045c3715e5932263f882a07287d546f42cf9be47832879\"" Dec 13 02:23:24.497095 env[1728]: time="2024-12-13T02:23:24.497049896Z" level=info msg="CreateContainer within sandbox \"27cef3adbf7aa8f2a0045c3715e5932263f882a07287d546f42cf9be47832879\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:23:24.511088 systemd[1]: Started cri-containerd-cc7d4082aa97046064070e65bb1964ace6638911b336bd1df80782f5bd1e214e.scope. Dec 13 02:23:24.541646 env[1728]: time="2024-12-13T02:23:24.540447471Z" level=info msg="CreateContainer within sandbox \"27cef3adbf7aa8f2a0045c3715e5932263f882a07287d546f42cf9be47832879\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"843a340b0e6ce2feda94a50112936eb932a61bc567120b4c58a38ed25a799105\"" Dec 13 02:23:24.542856 env[1728]: time="2024-12-13T02:23:24.542821601Z" level=info msg="StartContainer for \"843a340b0e6ce2feda94a50112936eb932a61bc567120b4c58a38ed25a799105\"" Dec 13 02:23:24.578139 systemd[1]: Started cri-containerd-843a340b0e6ce2feda94a50112936eb932a61bc567120b4c58a38ed25a799105.scope. 
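
With both coredns sandboxes up and their containers created above, the same state can be cross-checked against the API server. A client-go sketch, assuming a kubeconfig at the default path and the standard k8s-app=kube-dns label on the coredns pods:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }
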
Dec 13 02:23:24.615308 env[1728]: time="2024-12-13T02:23:24.615254112Z" level=info msg="StartContainer for \"cc7d4082aa97046064070e65bb1964ace6638911b336bd1df80782f5bd1e214e\" returns successfully" Dec 13 02:23:24.655369 env[1728]: time="2024-12-13T02:23:24.655312213Z" level=info msg="StartContainer for \"843a340b0e6ce2feda94a50112936eb932a61bc567120b4c58a38ed25a799105\" returns successfully" Dec 13 02:23:24.785599 kubelet[2563]: I1213 02:23:24.785451 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-85pr4" podStartSLOduration=31.785412456 podStartE2EDuration="31.785412456s" podCreationTimestamp="2024-12-13 02:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:23:24.784812514 +0000 UTC m=+37.589186309" watchObservedRunningTime="2024-12-13 02:23:24.785412456 +0000 UTC m=+37.589786250" Dec 13 02:23:24.804251 kubelet[2563]: I1213 02:23:24.804177 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9s425" podStartSLOduration=31.804155664 podStartE2EDuration="31.804155664s" podCreationTimestamp="2024-12-13 02:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:23:24.804080827 +0000 UTC m=+37.608454622" watchObservedRunningTime="2024-12-13 02:23:24.804155664 +0000 UTC m=+37.608529459" Dec 13 02:23:34.576459 systemd[1]: Started sshd@5-172.31.16.161:22-139.178.68.195:42016.service. Dec 13 02:23:34.776299 sshd[3997]: Accepted publickey for core from 139.178.68.195 port 42016 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:34.778752 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:34.785594 systemd-logind[1722]: New session 6 of user core. Dec 13 02:23:34.785722 systemd[1]: Started session-6.scope. Dec 13 02:23:35.130318 sshd[3997]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:35.133688 systemd[1]: sshd@5-172.31.16.161:22-139.178.68.195:42016.service: Deactivated successfully. Dec 13 02:23:35.134842 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:23:35.135711 systemd-logind[1722]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:23:35.136723 systemd-logind[1722]: Removed session 6. Dec 13 02:23:40.160896 systemd[1]: Started sshd@6-172.31.16.161:22-139.178.68.195:47050.service. Dec 13 02:23:40.331536 sshd[4009]: Accepted publickey for core from 139.178.68.195 port 47050 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:40.334132 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:40.344592 systemd-logind[1722]: New session 7 of user core. Dec 13 02:23:40.344701 systemd[1]: Started session-7.scope. Dec 13 02:23:40.586571 sshd[4009]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:40.600964 systemd[1]: sshd@6-172.31.16.161:22-139.178.68.195:47050.service: Deactivated successfully. Dec 13 02:23:40.603971 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:23:40.606180 systemd-logind[1722]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:23:40.609020 systemd-logind[1722]: Removed session 7. Dec 13 02:23:45.619206 systemd[1]: Started sshd@7-172.31.16.161:22-139.178.68.195:47058.service. 
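
Both coredns startup records above carry "0001-01-01 00:00:00 +0000 UTC" for firstStartedPulling and lastFinishedPulling: that is Go's zero time.Time, meaning the kubelet never observed an image pull for these pods, so podStartSLOduration equals podStartE2EDuration. A quick check of that sentinel:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "0001-01-01 00:00:00 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(t.IsZero()) // true: no pull window to subtract
    }
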
Dec 13 02:23:45.822322 sshd[4021]: Accepted publickey for core from 139.178.68.195 port 47058 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:45.824254 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:45.830613 systemd-logind[1722]: New session 8 of user core. Dec 13 02:23:45.830945 systemd[1]: Started session-8.scope. Dec 13 02:23:46.072087 sshd[4021]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:46.077023 systemd[1]: sshd@7-172.31.16.161:22-139.178.68.195:47058.service: Deactivated successfully. Dec 13 02:23:46.078544 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:23:46.079634 systemd-logind[1722]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:23:46.080561 systemd-logind[1722]: Removed session 8. Dec 13 02:23:51.097429 systemd[1]: Started sshd@8-172.31.16.161:22-139.178.68.195:34418.service. Dec 13 02:23:51.270130 sshd[4037]: Accepted publickey for core from 139.178.68.195 port 34418 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:51.274011 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:51.282640 systemd-logind[1722]: New session 9 of user core. Dec 13 02:23:51.283518 systemd[1]: Started session-9.scope. Dec 13 02:23:51.545607 sshd[4037]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:51.549496 systemd-logind[1722]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:23:51.549738 systemd[1]: sshd@8-172.31.16.161:22-139.178.68.195:34418.service: Deactivated successfully. Dec 13 02:23:51.551244 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:23:51.552334 systemd-logind[1722]: Removed session 9. Dec 13 02:23:51.575856 systemd[1]: Started sshd@9-172.31.16.161:22-139.178.68.195:34422.service. Dec 13 02:23:51.759061 sshd[4049]: Accepted publickey for core from 139.178.68.195 port 34422 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:51.760865 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:51.769548 systemd[1]: Started session-10.scope. Dec 13 02:23:51.770655 systemd-logind[1722]: New session 10 of user core. Dec 13 02:23:52.088107 sshd[4049]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:52.105494 systemd[1]: sshd@9-172.31.16.161:22-139.178.68.195:34422.service: Deactivated successfully. Dec 13 02:23:52.108718 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:23:52.109932 systemd-logind[1722]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:23:52.123680 systemd[1]: Started sshd@10-172.31.16.161:22-139.178.68.195:34438.service. Dec 13 02:23:52.126798 systemd-logind[1722]: Removed session 10. Dec 13 02:23:52.312136 sshd[4059]: Accepted publickey for core from 139.178.68.195 port 34438 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:52.316399 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:52.335463 systemd-logind[1722]: New session 11 of user core. Dec 13 02:23:52.336444 systemd[1]: Started session-11.scope. Dec 13 02:23:52.696952 sshd[4059]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:52.704126 systemd[1]: sshd@10-172.31.16.161:22-139.178.68.195:34438.service: Deactivated successfully. Dec 13 02:23:52.705029 systemd-logind[1722]: Session 11 logged out. Waiting for processes to exit. 
Dec 13 02:23:52.705671 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:23:52.707562 systemd-logind[1722]: Removed session 11. Dec 13 02:23:57.725673 systemd[1]: Started sshd@11-172.31.16.161:22-139.178.68.195:55218.service. Dec 13 02:23:57.891096 sshd[4073]: Accepted publickey for core from 139.178.68.195 port 55218 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:57.892932 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:57.899421 systemd[1]: Started session-12.scope. Dec 13 02:23:57.900080 systemd-logind[1722]: New session 12 of user core. Dec 13 02:23:58.111975 sshd[4073]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:58.117645 systemd[1]: sshd@11-172.31.16.161:22-139.178.68.195:55218.service: Deactivated successfully. Dec 13 02:23:58.118885 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:23:58.121106 systemd-logind[1722]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:23:58.123105 systemd-logind[1722]: Removed session 12. Dec 13 02:24:03.137848 systemd[1]: Started sshd@12-172.31.16.161:22-139.178.68.195:55232.service. Dec 13 02:24:03.293750 sshd[4086]: Accepted publickey for core from 139.178.68.195 port 55232 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:03.295220 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:03.300587 systemd-logind[1722]: New session 13 of user core. Dec 13 02:24:03.301074 systemd[1]: Started session-13.scope. Dec 13 02:24:03.492665 sshd[4086]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:03.496059 systemd-logind[1722]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:24:03.496397 systemd[1]: sshd@12-172.31.16.161:22-139.178.68.195:55232.service: Deactivated successfully. Dec 13 02:24:03.497344 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:24:03.498848 systemd-logind[1722]: Removed session 13. Dec 13 02:24:08.528519 systemd[1]: Started sshd@13-172.31.16.161:22-139.178.68.195:42394.service. Dec 13 02:24:08.694874 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 42394 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:08.696576 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:08.703108 systemd-logind[1722]: New session 14 of user core. Dec 13 02:24:08.703820 systemd[1]: Started session-14.scope. Dec 13 02:24:08.923664 sshd[4098]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:08.928842 systemd-logind[1722]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:24:08.929116 systemd[1]: sshd@13-172.31.16.161:22-139.178.68.195:42394.service: Deactivated successfully. Dec 13 02:24:08.930418 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:24:08.932117 systemd-logind[1722]: Removed session 14. Dec 13 02:24:08.960284 systemd[1]: Started sshd@14-172.31.16.161:22-139.178.68.195:42410.service. Dec 13 02:24:09.148480 sshd[4110]: Accepted publickey for core from 139.178.68.195 port 42410 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:09.151245 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:09.159894 systemd-logind[1722]: New session 15 of user core. Dec 13 02:24:09.160579 systemd[1]: Started session-15.scope. 
Dec 13 02:24:10.120170 sshd[4110]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:10.124947 systemd[1]: sshd@14-172.31.16.161:22-139.178.68.195:42410.service: Deactivated successfully. Dec 13 02:24:10.126468 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:24:10.128088 systemd-logind[1722]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:24:10.129705 systemd-logind[1722]: Removed session 15. Dec 13 02:24:10.150445 systemd[1]: Started sshd@15-172.31.16.161:22-139.178.68.195:42426.service. Dec 13 02:24:10.322969 sshd[4120]: Accepted publickey for core from 139.178.68.195 port 42426 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:10.326836 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:10.351327 systemd-logind[1722]: New session 16 of user core. Dec 13 02:24:10.352260 systemd[1]: Started session-16.scope. Dec 13 02:24:12.884258 sshd[4120]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:12.900980 systemd[1]: sshd@15-172.31.16.161:22-139.178.68.195:42426.service: Deactivated successfully. Dec 13 02:24:12.910189 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:24:12.910223 systemd-logind[1722]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:24:12.939803 systemd[1]: Started sshd@16-172.31.16.161:22-139.178.68.195:42438.service. Dec 13 02:24:12.944382 systemd-logind[1722]: Removed session 16. Dec 13 02:24:13.155077 sshd[4137]: Accepted publickey for core from 139.178.68.195 port 42438 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:13.157459 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:13.170663 systemd-logind[1722]: New session 17 of user core. Dec 13 02:24:13.170961 systemd[1]: Started session-17.scope. Dec 13 02:24:13.767403 sshd[4137]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:13.770466 systemd[1]: sshd@16-172.31.16.161:22-139.178.68.195:42438.service: Deactivated successfully. Dec 13 02:24:13.771371 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:24:13.772695 systemd-logind[1722]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:24:13.774400 systemd-logind[1722]: Removed session 17. Dec 13 02:24:13.794211 systemd[1]: Started sshd@17-172.31.16.161:22-139.178.68.195:42440.service. Dec 13 02:24:13.958326 sshd[4147]: Accepted publickey for core from 139.178.68.195 port 42440 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:13.959917 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:13.965627 systemd[1]: Started session-18.scope. Dec 13 02:24:13.966835 systemd-logind[1722]: New session 18 of user core. Dec 13 02:24:14.168019 sshd[4147]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:14.171781 systemd-logind[1722]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:24:14.172140 systemd[1]: sshd@17-172.31.16.161:22-139.178.68.195:42440.service: Deactivated successfully. Dec 13 02:24:14.173114 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:24:14.174194 systemd-logind[1722]: Removed session 18. Dec 13 02:24:19.202067 systemd[1]: Started sshd@18-172.31.16.161:22-139.178.68.195:44192.service. 
Dec 13 02:24:19.377444 sshd[4159]: Accepted publickey for core from 139.178.68.195 port 44192 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:19.379149 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:19.386289 systemd[1]: Started session-19.scope. Dec 13 02:24:19.387143 systemd-logind[1722]: New session 19 of user core. Dec 13 02:24:19.585083 sshd[4159]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:19.589870 systemd[1]: sshd@18-172.31.16.161:22-139.178.68.195:44192.service: Deactivated successfully. Dec 13 02:24:19.590864 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:24:19.591694 systemd-logind[1722]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:24:19.593339 systemd-logind[1722]: Removed session 19. Dec 13 02:24:24.615870 systemd[1]: Started sshd@19-172.31.16.161:22-139.178.68.195:44202.service. Dec 13 02:24:24.784746 sshd[4176]: Accepted publickey for core from 139.178.68.195 port 44202 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:24.786231 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:24.791830 systemd-logind[1722]: New session 20 of user core. Dec 13 02:24:24.792561 systemd[1]: Started session-20.scope. Dec 13 02:24:25.033691 sshd[4176]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:25.037184 systemd[1]: sshd@19-172.31.16.161:22-139.178.68.195:44202.service: Deactivated successfully. Dec 13 02:24:25.038709 systemd-logind[1722]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:24:25.038799 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:24:25.040481 systemd-logind[1722]: Removed session 20. Dec 13 02:24:30.061827 systemd[1]: Started sshd@20-172.31.16.161:22-139.178.68.195:57078.service. Dec 13 02:24:30.222913 sshd[4188]: Accepted publickey for core from 139.178.68.195 port 57078 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:30.224459 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:30.230824 systemd[1]: Started session-21.scope. Dec 13 02:24:30.232504 systemd-logind[1722]: New session 21 of user core. Dec 13 02:24:30.432336 sshd[4188]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:30.437761 systemd-logind[1722]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:24:30.438116 systemd[1]: sshd@20-172.31.16.161:22-139.178.68.195:57078.service: Deactivated successfully. Dec 13 02:24:30.439150 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:24:30.440494 systemd-logind[1722]: Removed session 21. Dec 13 02:24:35.460504 systemd[1]: Started sshd@21-172.31.16.161:22-139.178.68.195:57080.service. Dec 13 02:24:35.633479 sshd[4200]: Accepted publickey for core from 139.178.68.195 port 57080 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:35.635202 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:35.641226 systemd[1]: Started session-22.scope. Dec 13 02:24:35.641746 systemd-logind[1722]: New session 22 of user core. Dec 13 02:24:35.849403 sshd[4200]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:35.853038 systemd[1]: sshd@21-172.31.16.161:22-139.178.68.195:57080.service: Deactivated successfully. Dec 13 02:24:35.854184 systemd[1]: session-22.scope: Deactivated successfully. 
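Every accepted login in this window presents the same key, RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY, so this is one client identity reconnecting repeatedly rather than many users. To confirm which authorized key that fingerprint belongs to, x/crypto/ssh computes the same base64(SHA256) encoding sshd logs; a sketch, with the authorized_keys path for the core user being an assumption:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Parse the first key in authorized_keys and print its fingerprint
        // in the same "SHA256:..." form sshd uses in "Accepted publickey" lines.
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // assumed path
        if err != nil {
            panic(err)
        }
        key, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            panic(err)
        }
        fmt.Println(ssh.FingerprintSHA256(key)) // e.g. SHA256:4KbtXXAW...
    }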
Dec 13 02:24:35.855301 systemd-logind[1722]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:24:35.856273 systemd-logind[1722]: Removed session 22. Dec 13 02:24:35.876456 systemd[1]: Started sshd@22-172.31.16.161:22-139.178.68.195:57082.service. Dec 13 02:24:36.044191 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 57082 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:36.046635 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:36.052465 systemd[1]: Started session-23.scope. Dec 13 02:24:36.053416 systemd-logind[1722]: New session 23 of user core. Dec 13 02:24:38.326543 env[1728]: time="2024-12-13T02:24:38.322781979Z" level=info msg="StopContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" with timeout 30 (s)" Dec 13 02:24:38.329671 env[1728]: time="2024-12-13T02:24:38.327450048Z" level=info msg="Stop container \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" with signal terminated" Dec 13 02:24:38.349646 systemd[1]: cri-containerd-58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516.scope: Deactivated successfully. Dec 13 02:24:38.362080 systemd[1]: run-containerd-runc-k8s.io-6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83-runc.xrquI2.mount: Deactivated successfully. Dec 13 02:24:38.403539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516-rootfs.mount: Deactivated successfully. Dec 13 02:24:38.416440 env[1728]: time="2024-12-13T02:24:38.415620139Z" level=info msg="shim disconnected" id=58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516 Dec 13 02:24:38.416440 env[1728]: time="2024-12-13T02:24:38.415674462Z" level=warning msg="cleaning up after shim disconnected" id=58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516 namespace=k8s.io Dec 13 02:24:38.416440 env[1728]: time="2024-12-13T02:24:38.415688937Z" level=info msg="cleaning up dead shim" Dec 13 02:24:38.422358 env[1728]: time="2024-12-13T02:24:38.422280883Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:24:38.442839 env[1728]: time="2024-12-13T02:24:38.442793892Z" level=info msg="StopContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" with timeout 2 (s)" Dec 13 02:24:38.443556 env[1728]: time="2024-12-13T02:24:38.443527294Z" level=info msg="Stop container \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" with signal terminated" Dec 13 02:24:38.454333 env[1728]: time="2024-12-13T02:24:38.454284438Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4255 runtime=io.containerd.runc.v2\n" Dec 13 02:24:38.457360 env[1728]: time="2024-12-13T02:24:38.457231683Z" level=info msg="StopContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" returns successfully" Dec 13 02:24:38.458373 env[1728]: time="2024-12-13T02:24:38.458343152Z" level=info msg="StopPodSandbox for \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\"" Dec 13 02:24:38.458664 env[1728]: time="2024-12-13T02:24:38.458636657Z" level=info msg="Container to stop 
\"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.462355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa-shm.mount: Deactivated successfully. Dec 13 02:24:38.464622 systemd-networkd[1462]: lxc_health: Link DOWN Dec 13 02:24:38.464633 systemd-networkd[1462]: lxc_health: Lost carrier Dec 13 02:24:38.564920 systemd[1]: cri-containerd-c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa.scope: Deactivated successfully. Dec 13 02:24:38.591939 systemd[1]: cri-containerd-6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83.scope: Deactivated successfully. Dec 13 02:24:38.592255 systemd[1]: cri-containerd-6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83.scope: Consumed 8.731s CPU time. Dec 13 02:24:38.605490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa-rootfs.mount: Deactivated successfully. Dec 13 02:24:38.619542 env[1728]: time="2024-12-13T02:24:38.619466161Z" level=info msg="shim disconnected" id=c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa Dec 13 02:24:38.619817 env[1728]: time="2024-12-13T02:24:38.619794712Z" level=warning msg="cleaning up after shim disconnected" id=c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa namespace=k8s.io Dec 13 02:24:38.620055 env[1728]: time="2024-12-13T02:24:38.620034194Z" level=info msg="cleaning up dead shim" Dec 13 02:24:38.634637 env[1728]: time="2024-12-13T02:24:38.634587267Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4311 runtime=io.containerd.runc.v2\n" Dec 13 02:24:38.634980 env[1728]: time="2024-12-13T02:24:38.634946436Z" level=info msg="TearDown network for sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" successfully" Dec 13 02:24:38.635073 env[1728]: time="2024-12-13T02:24:38.634980701Z" level=info msg="StopPodSandbox for \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" returns successfully" Dec 13 02:24:38.638395 env[1728]: time="2024-12-13T02:24:38.638338937Z" level=info msg="shim disconnected" id=6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83 Dec 13 02:24:38.638585 env[1728]: time="2024-12-13T02:24:38.638561447Z" level=warning msg="cleaning up after shim disconnected" id=6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83 namespace=k8s.io Dec 13 02:24:38.638711 env[1728]: time="2024-12-13T02:24:38.638693565Z" level=info msg="cleaning up dead shim" Dec 13 02:24:38.649397 kubelet[2563]: I1213 02:24:38.649348 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8863b29-673f-45b4-bb60-4110267ae34d-cilium-config-path\") pod \"b8863b29-673f-45b4-bb60-4110267ae34d\" (UID: \"b8863b29-673f-45b4-bb60-4110267ae34d\") " Dec 13 02:24:38.649926 kubelet[2563]: I1213 02:24:38.649477 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7gt\" (UniqueName: \"kubernetes.io/projected/b8863b29-673f-45b4-bb60-4110267ae34d-kube-api-access-zd7gt\") pod \"b8863b29-673f-45b4-bb60-4110267ae34d\" (UID: \"b8863b29-673f-45b4-bb60-4110267ae34d\") " Dec 13 02:24:38.663903 kubelet[2563]: I1213 02:24:38.659359 2563 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8863b29-673f-45b4-bb60-4110267ae34d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8863b29-673f-45b4-bb60-4110267ae34d" (UID: "b8863b29-673f-45b4-bb60-4110267ae34d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:24:38.665624 env[1728]: time="2024-12-13T02:24:38.665498603Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n" Dec 13 02:24:38.669961 env[1728]: time="2024-12-13T02:24:38.669907400Z" level=info msg="StopContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" returns successfully" Dec 13 02:24:38.672920 env[1728]: time="2024-12-13T02:24:38.672877249Z" level=info msg="StopPodSandbox for \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\"" Dec 13 02:24:38.673061 env[1728]: time="2024-12-13T02:24:38.672956019Z" level=info msg="Container to stop \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.673061 env[1728]: time="2024-12-13T02:24:38.672977648Z" level=info msg="Container to stop \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.673061 env[1728]: time="2024-12-13T02:24:38.672992682Z" level=info msg="Container to stop \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.673061 env[1728]: time="2024-12-13T02:24:38.673009031Z" level=info msg="Container to stop \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.673061 env[1728]: time="2024-12-13T02:24:38.673025298Z" level=info msg="Container to stop \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:38.674177 kubelet[2563]: I1213 02:24:38.674140 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8863b29-673f-45b4-bb60-4110267ae34d-kube-api-access-zd7gt" (OuterVolumeSpecName: "kube-api-access-zd7gt") pod "b8863b29-673f-45b4-bb60-4110267ae34d" (UID: "b8863b29-673f-45b4-bb60-4110267ae34d"). InnerVolumeSpecName "kube-api-access-zd7gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:38.681776 systemd[1]: cri-containerd-17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034.scope: Deactivated successfully. 
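The teardown above follows the usual CRI shape: StopContainer sends the stop signal and waits out the grace period ("with timeout 30 (s)" for the operator container, "with timeout 2 (s)" for the agent), the cri-containerd scope deactivates, and containerd reports "shim disconnected" once the task is gone. A rough equivalent of that kill-and-wait against the containerd task API, offered as a sketch rather than the kubelet's actual code path (container ID and namespace taken from the log):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI containers live in the k8s.io namespace, as the shim logs show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        c, err := client.LoadContainer(ctx, "6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exited, err := task.Wait(ctx) // register the waiter before signalling
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case <-exited: // exited within the grace period
        case <-time.After(2 * time.Second): // "timeout 2 (s)" as logged
            _ = task.Kill(ctx, syscall.SIGKILL)
        }
    }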
Dec 13 02:24:38.722355 env[1728]: time="2024-12-13T02:24:38.722289203Z" level=info msg="shim disconnected" id=17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034 Dec 13 02:24:38.722355 env[1728]: time="2024-12-13T02:24:38.722352085Z" level=warning msg="cleaning up after shim disconnected" id=17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034 namespace=k8s.io Dec 13 02:24:38.722355 env[1728]: time="2024-12-13T02:24:38.722364563Z" level=info msg="cleaning up dead shim" Dec 13 02:24:38.732441 env[1728]: time="2024-12-13T02:24:38.732382624Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4357 runtime=io.containerd.runc.v2\n" Dec 13 02:24:38.733194 env[1728]: time="2024-12-13T02:24:38.733149180Z" level=info msg="TearDown network for sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" successfully" Dec 13 02:24:38.733359 env[1728]: time="2024-12-13T02:24:38.733226013Z" level=info msg="StopPodSandbox for \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" returns successfully" Dec 13 02:24:38.750906 kubelet[2563]: I1213 02:24:38.750826 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-etc-cni-netd\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.751422 kubelet[2563]: I1213 02:24:38.751383 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-kernel\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.751610 kubelet[2563]: I1213 02:24:38.751595 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-config-path\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.751801 kubelet[2563]: I1213 02:24:38.751758 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-lib-modules\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.751978 kubelet[2563]: I1213 02:24:38.751894 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cni-path\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.751978 kubelet[2563]: I1213 02:24:38.751920 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-cgroup\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752292 kubelet[2563]: I1213 02:24:38.752055 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-clustermesh-secrets\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: 
\"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752292 kubelet[2563]: I1213 02:24:38.752086 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsmj7\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-kube-api-access-qsmj7\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752292 kubelet[2563]: I1213 02:24:38.752213 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-xtables-lock\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752292 kubelet[2563]: I1213 02:24:38.752237 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hostproc\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752292 kubelet[2563]: I1213 02:24:38.752257 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-net\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752990 kubelet[2563]: I1213 02:24:38.752385 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-run\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752990 kubelet[2563]: I1213 02:24:38.752407 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-bpf-maps\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752990 kubelet[2563]: I1213 02:24:38.752546 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hubble-tls\") pod \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\" (UID: \"ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff\") " Dec 13 02:24:38.752990 kubelet[2563]: I1213 02:24:38.752715 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8863b29-673f-45b4-bb60-4110267ae34d-cilium-config-path\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.752990 kubelet[2563]: I1213 02:24:38.752735 2563 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zd7gt\" (UniqueName: \"kubernetes.io/projected/b8863b29-673f-45b4-bb60-4110267ae34d-kube-api-access-zd7gt\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.753672 kubelet[2563]: I1213 02:24:38.753568 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.753672 kubelet[2563]: I1213 02:24:38.753632 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.756805 kubelet[2563]: I1213 02:24:38.756774 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757083 kubelet[2563]: I1213 02:24:38.756956 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757083 kubelet[2563]: I1213 02:24:38.756983 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757083 kubelet[2563]: I1213 02:24:38.757021 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757083 kubelet[2563]: I1213 02:24:38.757043 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757083 kubelet[2563]: I1213 02:24:38.757063 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757688 kubelet[2563]: I1213 02:24:38.757544 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.757688 kubelet[2563]: I1213 02:24:38.757625 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:38.763643 kubelet[2563]: I1213 02:24:38.763603 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:24:38.765330 kubelet[2563]: I1213 02:24:38.765220 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-kube-api-access-qsmj7" (OuterVolumeSpecName: "kube-api-access-qsmj7") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "kube-api-access-qsmj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:38.766651 kubelet[2563]: I1213 02:24:38.766619 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:38.772895 kubelet[2563]: I1213 02:24:38.772862 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" (UID: "ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853706 2563 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-etc-cni-netd\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853744 2563 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-kernel\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853800 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-config-path\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853811 2563 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-lib-modules\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853823 2563 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cni-path\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853834 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-cgroup\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.854028 kubelet[2563]: I1213 02:24:38.853981 2563 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-clustermesh-secrets\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854005 2563 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qsmj7\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-kube-api-access-qsmj7\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854866 2563 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-xtables-lock\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854883 2563 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hostproc\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854908 2563 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-host-proc-sys-net\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854922 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-cilium-run\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854935 2563 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-bpf-maps\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.855249 kubelet[2563]: I1213 02:24:38.854946 2563 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff-hubble-tls\") on node \"ip-172-31-16-161\" DevicePath \"\"" Dec 13 02:24:38.956167 systemd[1]: Removed slice kubepods-besteffort-podb8863b29_673f_45b4_bb60_4110267ae34d.slice. Dec 13 02:24:38.957318 kubelet[2563]: I1213 02:24:38.957003 2563 scope.go:117] "RemoveContainer" containerID="58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516" Dec 13 02:24:38.962179 env[1728]: time="2024-12-13T02:24:38.962005773Z" level=info msg="RemoveContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\"" Dec 13 02:24:38.971698 env[1728]: time="2024-12-13T02:24:38.971645213Z" level=info msg="RemoveContainer for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" returns successfully" Dec 13 02:24:38.975666 kubelet[2563]: I1213 02:24:38.975637 2563 scope.go:117] "RemoveContainer" containerID="58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516" Dec 13 02:24:38.978912 systemd[1]: Removed slice kubepods-burstable-podddbe28ac_0859_4e64_93fa_d3d7fbbad4ff.slice. Dec 13 02:24:38.979131 systemd[1]: kubepods-burstable-podddbe28ac_0859_4e64_93fa_d3d7fbbad4ff.slice: Consumed 8.855s CPU time. Dec 13 02:24:38.984428 env[1728]: time="2024-12-13T02:24:38.984325878Z" level=error msg="ContainerStatus for \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\": not found" Dec 13 02:24:38.986447 kubelet[2563]: E1213 02:24:38.986411 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\": not found" containerID="58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516" Dec 13 02:24:38.986676 kubelet[2563]: I1213 02:24:38.986465 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516"} err="failed to get container status \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\": rpc error: code = NotFound desc = an error occurred when try to find container \"58315738b84d244282c62bed5cf86df5c43bc6ef981569fd1827b603f727d516\": not found" Dec 13 02:24:38.986676 kubelet[2563]: I1213 02:24:38.986646 2563 scope.go:117] "RemoveContainer" containerID="6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83" Dec 13 02:24:38.992587 env[1728]: time="2024-12-13T02:24:38.991896427Z" level=info msg="RemoveContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\"" Dec 13 02:24:39.002894 env[1728]: time="2024-12-13T02:24:39.002809498Z" level=info msg="RemoveContainer for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" returns successfully" Dec 13 02:24:39.003354 kubelet[2563]: I1213 02:24:39.003328 2563 scope.go:117] "RemoveContainer" containerID="d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29" Dec 13 02:24:39.011357 env[1728]: time="2024-12-13T02:24:39.009619630Z" level=info msg="RemoveContainer for 
\"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\"" Dec 13 02:24:39.018112 env[1728]: time="2024-12-13T02:24:39.018060143Z" level=info msg="RemoveContainer for \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\" returns successfully" Dec 13 02:24:39.018619 kubelet[2563]: I1213 02:24:39.018597 2563 scope.go:117] "RemoveContainer" containerID="4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae" Dec 13 02:24:39.022364 env[1728]: time="2024-12-13T02:24:39.022319832Z" level=info msg="RemoveContainer for \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\"" Dec 13 02:24:39.027678 env[1728]: time="2024-12-13T02:24:39.027627686Z" level=info msg="RemoveContainer for \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\" returns successfully" Dec 13 02:24:39.027869 kubelet[2563]: I1213 02:24:39.027845 2563 scope.go:117] "RemoveContainer" containerID="b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f" Dec 13 02:24:39.029653 env[1728]: time="2024-12-13T02:24:39.029608755Z" level=info msg="RemoveContainer for \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\"" Dec 13 02:24:39.035915 env[1728]: time="2024-12-13T02:24:39.035872687Z" level=info msg="RemoveContainer for \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\" returns successfully" Dec 13 02:24:39.036403 kubelet[2563]: I1213 02:24:39.036342 2563 scope.go:117] "RemoveContainer" containerID="5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde" Dec 13 02:24:39.038600 env[1728]: time="2024-12-13T02:24:39.038374009Z" level=info msg="RemoveContainer for \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\"" Dec 13 02:24:39.043934 env[1728]: time="2024-12-13T02:24:39.043887326Z" level=info msg="RemoveContainer for \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\" returns successfully" Dec 13 02:24:39.044271 kubelet[2563]: I1213 02:24:39.044247 2563 scope.go:117] "RemoveContainer" containerID="6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83" Dec 13 02:24:39.044693 env[1728]: time="2024-12-13T02:24:39.044626801Z" level=error msg="ContainerStatus for \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\": not found" Dec 13 02:24:39.044840 kubelet[2563]: E1213 02:24:39.044813 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\": not found" containerID="6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83" Dec 13 02:24:39.044922 kubelet[2563]: I1213 02:24:39.044851 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83"} err="failed to get container status \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83\": not found" Dec 13 02:24:39.044922 kubelet[2563]: I1213 02:24:39.044880 2563 scope.go:117] "RemoveContainer" containerID="d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29" Dec 13 02:24:39.045134 env[1728]: 
time="2024-12-13T02:24:39.045077778Z" level=error msg="ContainerStatus for \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\": not found" Dec 13 02:24:39.045343 kubelet[2563]: E1213 02:24:39.045248 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\": not found" containerID="d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29" Dec 13 02:24:39.045423 kubelet[2563]: I1213 02:24:39.045347 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29"} err="failed to get container status \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\": rpc error: code = NotFound desc = an error occurred when try to find container \"d526b6cd5c34acfde4d6472c146ca5891b36a394a37ffac6c9902d009cd39c29\": not found" Dec 13 02:24:39.045423 kubelet[2563]: I1213 02:24:39.045373 2563 scope.go:117] "RemoveContainer" containerID="4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae" Dec 13 02:24:39.045725 env[1728]: time="2024-12-13T02:24:39.045667453Z" level=error msg="ContainerStatus for \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\": not found" Dec 13 02:24:39.045855 kubelet[2563]: E1213 02:24:39.045830 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\": not found" containerID="4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae" Dec 13 02:24:39.045942 kubelet[2563]: I1213 02:24:39.045858 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae"} err="failed to get container status \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d9b10fb2cf52b2be9bb1c4736c1226539edb8bbeebd2b81d1cb56287efb8cae\": not found" Dec 13 02:24:39.045942 kubelet[2563]: I1213 02:24:39.045878 2563 scope.go:117] "RemoveContainer" containerID="b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f" Dec 13 02:24:39.046139 env[1728]: time="2024-12-13T02:24:39.046081215Z" level=error msg="ContainerStatus for \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\": not found" Dec 13 02:24:39.046247 kubelet[2563]: E1213 02:24:39.046220 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\": not found" containerID="b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f" Dec 13 02:24:39.046324 kubelet[2563]: I1213 02:24:39.046250 2563 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f"} err="failed to get container status \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b08576a0e289933e2f06ea6105f399a07836518ddfd04288d284e2097076f51f\": not found" Dec 13 02:24:39.046324 kubelet[2563]: I1213 02:24:39.046270 2563 scope.go:117] "RemoveContainer" containerID="5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde" Dec 13 02:24:39.046544 env[1728]: time="2024-12-13T02:24:39.046475628Z" level=error msg="ContainerStatus for \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\": not found" Dec 13 02:24:39.046681 kubelet[2563]: E1213 02:24:39.046655 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\": not found" containerID="5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde" Dec 13 02:24:39.046761 kubelet[2563]: I1213 02:24:39.046683 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde"} err="failed to get container status \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\": rpc error: code = NotFound desc = an error occurred when try to find container \"5678f73ce1b1ee456fbb0fc69bc3d6438c05dfed4889ca56943d5c43b8d5ccde\": not found" Dec 13 02:24:39.356347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e970a59567ef0f528330009892e99795053cb7047445a66469ad315f8c19f83-rootfs.mount: Deactivated successfully. Dec 13 02:24:39.356476 systemd[1]: var-lib-kubelet-pods-b8863b29\x2d673f\x2d45b4\x2dbb60\x2d4110267ae34d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzd7gt.mount: Deactivated successfully. Dec 13 02:24:39.356601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034-rootfs.mount: Deactivated successfully. Dec 13 02:24:39.356681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034-shm.mount: Deactivated successfully. Dec 13 02:24:39.356758 systemd[1]: var-lib-kubelet-pods-ddbe28ac\x2d0859\x2d4e64\x2d93fa\x2dd3d7fbbad4ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsmj7.mount: Deactivated successfully. Dec 13 02:24:39.356843 systemd[1]: var-lib-kubelet-pods-ddbe28ac\x2d0859\x2d4e64\x2d93fa\x2dd3d7fbbad4ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:24:39.356984 systemd[1]: var-lib-kubelet-pods-ddbe28ac\x2d0859\x2d4e64\x2d93fa\x2dd3d7fbbad4ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
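The .mount units being deactivated above look opaque because systemd encodes each mount path into its unit name: "/" becomes "-" and most other punctuation is hex-escaped, which is why a "-" inside a pod UID appears as \x2d and the "~" in kubernetes.io~projected as \x7e. A rough reimplementation of that escaping (close to `systemd-escape --path`, ignoring edge cases such as a leading dot) that reproduces the kubelet volume unit names in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates systemd's path escaping: strip the outer
    // slashes, map "/" to "-", keep [A-Za-z0-9_.], hex-escape the rest.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Prints var-lib-kubelet-pods-ddbe28ac\x2d0859\x2d...\x7eprojected-hubble\x2dtls,
        // matching the unit names in the log above.
        fmt.Println(escapePath("/var/lib/kubelet/pods/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff/volumes/kubernetes.io~projected/hubble-tls"))
    }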
Dec 13 02:24:39.544053 kubelet[2563]: I1213 02:24:39.543695 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8863b29-673f-45b4-bb60-4110267ae34d" path="/var/lib/kubelet/pods/b8863b29-673f-45b4-bb60-4110267ae34d/volumes" Dec 13 02:24:39.547599 kubelet[2563]: I1213 02:24:39.547442 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" path="/var/lib/kubelet/pods/ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff/volumes" Dec 13 02:24:40.267776 sshd[4212]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:40.273139 systemd[1]: sshd@22-172.31.16.161:22-139.178.68.195:57082.service: Deactivated successfully. Dec 13 02:24:40.275910 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:24:40.283776 systemd-logind[1722]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:24:40.285001 systemd-logind[1722]: Removed session 23. Dec 13 02:24:40.295254 systemd[1]: Started sshd@23-172.31.16.161:22-139.178.68.195:46930.service. Dec 13 02:24:40.502506 sshd[4375]: Accepted publickey for core from 139.178.68.195 port 46930 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:40.504370 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:40.527740 systemd-logind[1722]: New session 24 of user core. Dec 13 02:24:40.528627 systemd[1]: Started session-24.scope. Dec 13 02:24:41.363486 sshd[4375]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:41.367065 systemd[1]: sshd@23-172.31.16.161:22-139.178.68.195:46930.service: Deactivated successfully. Dec 13 02:24:41.368797 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:24:41.371006 systemd-logind[1722]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:24:41.372932 systemd-logind[1722]: Removed session 24. Dec 13 02:24:41.394682 systemd[1]: Started sshd@24-172.31.16.161:22-139.178.68.195:46936.service. 
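With the unmounts done, the kubelet's housekeeping pass (kubelet_volumes.go above) declares the two pods' volume directories orphaned and removes them. Which pods still hold volume directories on a node can be checked directly; a sketch, not kubelet code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // List /var/lib/kubelet/pods/<uid> entries that still have a
        // volumes/ subdirectory, i.e. candidates the orphan sweep would
        // inspect on its next pass.
        const root = "/var/lib/kubelet/pods"
        entries, err := os.ReadDir(root)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            vdir := filepath.Join(root, e.Name(), "volumes")
            if _, err := os.Stat(vdir); err == nil {
                fmt.Println(vdir)
            }
        }
    }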
Dec 13 02:24:41.474759 kubelet[2563]: E1213 02:24:41.474721 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="mount-bpf-fs" Dec 13 02:24:41.475590 kubelet[2563]: E1213 02:24:41.475572 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8863b29-673f-45b4-bb60-4110267ae34d" containerName="cilium-operator" Dec 13 02:24:41.475701 kubelet[2563]: E1213 02:24:41.475690 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="clean-cilium-state" Dec 13 02:24:41.475785 kubelet[2563]: E1213 02:24:41.475774 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="cilium-agent" Dec 13 02:24:41.475864 kubelet[2563]: E1213 02:24:41.475855 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="apply-sysctl-overwrites" Dec 13 02:24:41.475951 kubelet[2563]: E1213 02:24:41.475942 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="mount-cgroup" Dec 13 02:24:41.480054 kubelet[2563]: I1213 02:24:41.480022 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbe28ac-0859-4e64-93fa-d3d7fbbad4ff" containerName="cilium-agent" Dec 13 02:24:41.480258 kubelet[2563]: I1213 02:24:41.480246 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8863b29-673f-45b4-bb60-4110267ae34d" containerName="cilium-operator" Dec 13 02:24:41.481353 kubelet[2563]: I1213 02:24:41.481327 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hubble-tls\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.481551 kubelet[2563]: I1213 02:24:41.481533 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7s2\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-kube-api-access-8f7s2\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.481668 kubelet[2563]: I1213 02:24:41.481652 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-config-path\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.481777 kubelet[2563]: I1213 02:24:41.481764 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-net\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.481884 kubelet[2563]: I1213 02:24:41.481872 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-run\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.481963 kubelet[2563]: I1213 
02:24:41.481952 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-bpf-maps\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.482057 kubelet[2563]: I1213 02:24:41.482043 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-xtables-lock\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.484073 kubelet[2563]: I1213 02:24:41.484042 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-clustermesh-secrets\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.484808 kubelet[2563]: I1213 02:24:41.484377 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-etc-cni-netd\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.484954 kubelet[2563]: I1213 02:24:41.484936 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-kernel\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.495804 kubelet[2563]: I1213 02:24:41.495766 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cni-path\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.497891 kubelet[2563]: I1213 02:24:41.497863 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-ipsec-secrets\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.502360 systemd[1]: Created slice kubepods-burstable-podd0866bef_65f6_4a5a_8770_5d1ab3a9cb20.slice. 
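The VerifyControllerAttachedVolume burst above is the kubelet wiring up the replacement pod cilium-cxx65: hostPath mounts (bpf-maps, cilium-run, cni-path, and friends), the clustermesh-secrets and cilium-ipsec-secrets secrets, the cilium-config-path ConfigMap, and the projected hubble-tls and service-account token volumes. The same list can be read back from the pod spec with client-go; a sketch, with the kubeconfig path being an assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-cxx65", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print each volume the reconciler entries above are attaching.
        for _, v := range pod.Spec.Volumes {
            fmt.Printf("%-24s %+v\n", v.Name, v.VolumeSource)
        }
    }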
Dec 13 02:24:41.507899 kubelet[2563]: I1213 02:24:41.507867 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-cgroup\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.508131 kubelet[2563]: I1213 02:24:41.508111 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-lib-modules\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.508273 kubelet[2563]: I1213 02:24:41.508258 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hostproc\") pod \"cilium-cxx65\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") " pod="kube-system/cilium-cxx65" Dec 13 02:24:41.573731 sshd[4385]: Accepted publickey for core from 139.178.68.195 port 46936 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:24:41.575629 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:24:41.581834 systemd-logind[1722]: New session 25 of user core. Dec 13 02:24:41.583223 systemd[1]: Started session-25.scope. Dec 13 02:24:41.809350 env[1728]: time="2024-12-13T02:24:41.809301706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxx65,Uid:d0866bef-65f6-4a5a-8770-5d1ab3a9cb20,Namespace:kube-system,Attempt:0,}" Dec 13 02:24:41.853187 env[1728]: time="2024-12-13T02:24:41.852925469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:24:41.853187 env[1728]: time="2024-12-13T02:24:41.853148581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:24:41.853187 env[1728]: time="2024-12-13T02:24:41.853166882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:24:41.853821 env[1728]: time="2024-12-13T02:24:41.853448433Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4 pid=4404 runtime=io.containerd.runc.v2 Dec 13 02:24:41.877104 systemd[1]: Started cri-containerd-444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4.scope. 
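"Started cri-containerd-444454….scope" is the systemd cgroup driver at work: each shim-managed container gets a transient scope parented under the pod slice created just before it (kubepods-burstable-podd0866bef_….slice). The resulting cgroup can be inspected directly; a sketch assuming the legacy cgroup-v1 systemd hierarchy on this host, so the mount point would differ on a unified cgroup-v2 layout:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Path pieces taken from the log: pod slice plus container scope.
        slice := "kubepods-burstable-podd0866bef_65f6_4a5a_8770_5d1ab3a9cb20.slice"
        scope := "cri-containerd-444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4.scope"
        procs := filepath.Join("/sys/fs/cgroup/systemd/kubepods.slice",
            "kubepods-burstable.slice", slice, scope, "cgroup.procs")
        b, err := os.ReadFile(procs)
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // the scope vanishes once the task exits
            return
        }
        fmt.Printf("PIDs in %s:\n%s", scope, b)
    }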
Dec 13 02:24:41.938756 env[1728]: time="2024-12-13T02:24:41.938702461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxx65,Uid:d0866bef-65f6-4a5a-8770-5d1ab3a9cb20,Namespace:kube-system,Attempt:0,} returns sandbox id \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\"" Dec 13 02:24:41.959534 env[1728]: time="2024-12-13T02:24:41.955667514Z" level=info msg="CreateContainer within sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:24:41.989402 env[1728]: time="2024-12-13T02:24:41.989350526Z" level=info msg="CreateContainer within sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\"" Dec 13 02:24:41.990910 env[1728]: time="2024-12-13T02:24:41.990859125Z" level=info msg="StartContainer for \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\"" Dec 13 02:24:42.052456 systemd[1]: Started cri-containerd-4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d.scope. Dec 13 02:24:42.093339 systemd[1]: cri-containerd-4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d.scope: Deactivated successfully. Dec 13 02:24:42.134218 env[1728]: time="2024-12-13T02:24:42.134142880Z" level=info msg="shim disconnected" id=4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d Dec 13 02:24:42.134218 env[1728]: time="2024-12-13T02:24:42.134212370Z" level=warning msg="cleaning up after shim disconnected" id=4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d namespace=k8s.io Dec 13 02:24:42.134218 env[1728]: time="2024-12-13T02:24:42.134224872Z" level=info msg="cleaning up dead shim" Dec 13 02:24:42.146925 sshd[4385]: pam_unix(sshd:session): session closed for user core Dec 13 02:24:42.151981 systemd[1]: sshd@24-172.31.16.161:22-139.178.68.195:46936.service: Deactivated successfully. Dec 13 02:24:42.153288 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:24:42.158607 systemd-logind[1722]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:24:42.160598 systemd-logind[1722]: Removed session 25. 
Dec 13 02:24:42.165675 env[1728]: time="2024-12-13T02:24:42.165623114Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4464 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:24:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:24:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:24:42.167728 env[1728]: time="2024-12-13T02:24:42.167584897Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed"
Dec 13 02:24:42.168820 env[1728]: time="2024-12-13T02:24:42.168755205Z" level=error msg="Failed to pipe stderr of container \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\"" error="reading from a closed fifo"
Dec 13 02:24:42.168968 env[1728]: time="2024-12-13T02:24:42.168870852Z" level=error msg="Failed to pipe stdout of container \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\"" error="reading from a closed fifo"
Dec 13 02:24:42.178310 systemd[1]: Started sshd@25-172.31.16.161:22-139.178.68.195:46942.service.
Dec 13 02:24:42.190595 env[1728]: time="2024-12-13T02:24:42.190481700Z" level=error msg="StartContainer for \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:24:42.191358 kubelet[2563]: E1213 02:24:42.191016 2563 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d"
Dec 13 02:24:42.198713 kubelet[2563]: E1213 02:24:42.196499 2563 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 02:24:42.198713 kubelet[2563]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:24:42.198713 kubelet[2563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:24:42.198713 kubelet[2563]: rm /hostbin/cilium-mount
Dec 13 02:24:42.199019 kubelet[2563]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f7s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-cxx65_kube-system(d0866bef-65f6-4a5a-8770-5d1ab3a9cb20): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:24:42.199019 kubelet[2563]: > logger="UnhandledError"
Dec 13 02:24:42.199866 kubelet[2563]: E1213 02:24:42.199794 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cxx65" podUID="d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"
Dec 13 02:24:42.372774 sshd[4481]: Accepted publickey for core from 139.178.68.195 port 46942 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:24:42.377014 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:24:42.389563 systemd[1]: Started session-26.scope.
Dec 13 02:24:42.390259 systemd-logind[1722]: New session 26 of user core.
Dec 13 02:24:42.721949 kubelet[2563]: E1213 02:24:42.721895 2563 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:24:42.995244 env[1728]: time="2024-12-13T02:24:42.994966715Z" level=info msg="StopPodSandbox for \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\""
Dec 13 02:24:42.995244 env[1728]: time="2024-12-13T02:24:42.995032788Z" level=info msg="Container to stop \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:42.999824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4-shm.mount: Deactivated successfully.
Dec 13 02:24:43.021121 systemd[1]: cri-containerd-444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4.scope: Deactivated successfully.
Dec 13 02:24:43.061446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4-rootfs.mount: Deactivated successfully.
Dec 13 02:24:43.092490 env[1728]: time="2024-12-13T02:24:43.092023637Z" level=info msg="shim disconnected" id=444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4
Dec 13 02:24:43.093206 env[1728]: time="2024-12-13T02:24:43.092812636Z" level=warning msg="cleaning up after shim disconnected" id=444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4 namespace=k8s.io
Dec 13 02:24:43.093206 env[1728]: time="2024-12-13T02:24:43.092864240Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:43.102674 env[1728]: time="2024-12-13T02:24:43.102628762Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4509 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:43.103085 env[1728]: time="2024-12-13T02:24:43.103049248Z" level=info msg="TearDown network for sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" successfully"
Dec 13 02:24:43.103085 env[1728]: time="2024-12-13T02:24:43.103080072Z" level=info msg="StopPodSandbox for \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" returns successfully"
Dec 13 02:24:43.226279 kubelet[2563]: I1213 02:24:43.226239 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hostproc\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226287 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-cgroup\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226309 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-lib-modules\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226339 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f7s2\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-kube-api-access-8f7s2\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226360 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cni-path\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226383 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hubble-tls\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226406 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-clustermesh-secrets\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226432 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-config-path\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226452 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-run\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.226505 kubelet[2563]: I1213 02:24:43.226497 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-ipsec-secrets\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226544 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-kernel\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226567 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-etc-cni-netd\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226593 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-net\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226615 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-xtables-lock\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226646 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-bpf-maps\") pod \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\" (UID: \"d0866bef-65f6-4a5a-8770-5d1ab3a9cb20\") "
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226735 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226767 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.227027 kubelet[2563]: I1213 02:24:43.226787 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.227784 kubelet[2563]: I1213 02:24:43.227415 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hostproc" (OuterVolumeSpecName: "hostproc") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.227784 kubelet[2563]: I1213 02:24:43.227471 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.227784 kubelet[2563]: I1213 02:24:43.227494 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cni-path" (OuterVolumeSpecName: "cni-path") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.229124 kubelet[2563]: I1213 02:24:43.229093 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.229286 kubelet[2563]: I1213 02:24:43.229268 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.229460 kubelet[2563]: I1213 02:24:43.229444 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.229601 kubelet[2563]: I1213 02:24:43.229585 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:43.237116 systemd[1]: var-lib-kubelet-pods-d0866bef\x2d65f6\x2d4a5a\x2d8770\x2d5d1ab3a9cb20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8f7s2.mount: Deactivated successfully.
Dec 13 02:24:43.248917 kubelet[2563]: I1213 02:24:43.248383 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-kube-api-access-8f7s2" (OuterVolumeSpecName: "kube-api-access-8f7s2") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "kube-api-access-8f7s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:43.263409 systemd[1]: var-lib-kubelet-pods-d0866bef\x2d65f6\x2d4a5a\x2d8770\x2d5d1ab3a9cb20-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:24:43.270709 kubelet[2563]: I1213 02:24:43.270666 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:24:43.271382 kubelet[2563]: I1213 02:24:43.271135 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:24:43.271553 kubelet[2563]: I1213 02:24:43.271313 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:43.285036 kubelet[2563]: I1213 02:24:43.284781 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" (UID: "d0866bef-65f6-4a5a-8770-5d1ab3a9cb20"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:24:43.327827 kubelet[2563]: I1213 02:24:43.327780 2563 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-etc-cni-netd\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.327827 kubelet[2563]: I1213 02:24:43.327816 2563 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-net\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.327827 kubelet[2563]: I1213 02:24:43.327835 2563 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-xtables-lock\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327848 2563 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-bpf-maps\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327861 2563 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8f7s2\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-kube-api-access-8f7s2\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327872 2563 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hostproc\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327882 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-cgroup\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327892 2563 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-lib-modules\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327901 2563 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-hubble-tls\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327911 2563 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cni-path\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327921 2563 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-clustermesh-secrets\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327931 2563 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-host-proc-sys-kernel\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327942 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-config-path\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327954 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-run\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.328101 kubelet[2563]: I1213 02:24:43.327964 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20-cilium-ipsec-secrets\") on node \"ip-172-31-16-161\" DevicePath \"\""
Dec 13 02:24:43.538474 systemd[1]: Removed slice kubepods-burstable-podd0866bef_65f6_4a5a_8770_5d1ab3a9cb20.slice.
Dec 13 02:24:43.629123 systemd[1]: var-lib-kubelet-pods-d0866bef\x2d65f6\x2d4a5a\x2d8770\x2d5d1ab3a9cb20-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:24:43.629255 systemd[1]: var-lib-kubelet-pods-d0866bef\x2d65f6\x2d4a5a\x2d8770\x2d5d1ab3a9cb20-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:24:44.010038 kubelet[2563]: I1213 02:24:44.009135 2563 scope.go:117] "RemoveContainer" containerID="4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d"
Dec 13 02:24:44.014479 env[1728]: time="2024-12-13T02:24:44.014413782Z" level=info msg="RemoveContainer for \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\""
Dec 13 02:24:44.026301 env[1728]: time="2024-12-13T02:24:44.026190490Z" level=info msg="RemoveContainer for \"4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d\" returns successfully"
Dec 13 02:24:44.085975 kubelet[2563]: E1213 02:24:44.085939 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" containerName="mount-cgroup"
Dec 13 02:24:44.086189 kubelet[2563]: I1213 02:24:44.086013 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" containerName="mount-cgroup"
Dec 13 02:24:44.093199 systemd[1]: Created slice kubepods-burstable-pod5bb04ba3_7cf2_4038_b436_a63a9cf5d03a.slice.
Dec 13 02:24:44.134292 kubelet[2563]: I1213 02:24:44.134191 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-hubble-tls\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134583 kubelet[2563]: I1213 02:24:44.134547 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-xtables-lock\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134666 kubelet[2563]: I1213 02:24:44.134587 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-cilium-config-path\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134666 kubelet[2563]: I1213 02:24:44.134612 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-cilium-ipsec-secrets\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134666 kubelet[2563]: I1213 02:24:44.134637 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gp5\" (UniqueName: \"kubernetes.io/projected/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-kube-api-access-z2gp5\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134666 kubelet[2563]: I1213 02:24:44.134663 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-hostproc\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134685 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-host-proc-sys-net\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134715 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-cilium-run\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134739 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-clustermesh-secrets\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134834 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-cilium-cgroup\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134856 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-lib-modules\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134885 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-etc-cni-netd\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134910 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-host-proc-sys-kernel\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.134950 kubelet[2563]: I1213 02:24:44.134940 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-bpf-maps\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.135352 kubelet[2563]: I1213 02:24:44.134962 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bb04ba3-7cf2-4038-b436-a63a9cf5d03a-cni-path\") pod \"cilium-d7rjp\" (UID: \"5bb04ba3-7cf2-4038-b436-a63a9cf5d03a\") " pod="kube-system/cilium-d7rjp"
Dec 13 02:24:44.404189 env[1728]: time="2024-12-13T02:24:44.400877603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7rjp,Uid:5bb04ba3-7cf2-4038-b436-a63a9cf5d03a,Namespace:kube-system,Attempt:0,}"
Dec 13 02:24:44.430478 env[1728]: time="2024-12-13T02:24:44.430025083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:24:44.430478 env[1728]: time="2024-12-13T02:24:44.430083298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:24:44.430478 env[1728]: time="2024-12-13T02:24:44.430103220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:24:44.430809 env[1728]: time="2024-12-13T02:24:44.430590133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a pid=4538 runtime=io.containerd.runc.v2
Dec 13 02:24:44.462216 systemd[1]: Started cri-containerd-89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a.scope.
Dec 13 02:24:44.504083 env[1728]: time="2024-12-13T02:24:44.504040549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7rjp,Uid:5bb04ba3-7cf2-4038-b436-a63a9cf5d03a,Namespace:kube-system,Attempt:0,} returns sandbox id \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\""
Dec 13 02:24:44.512227 env[1728]: time="2024-12-13T02:24:44.512146441Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:24:44.547491 env[1728]: time="2024-12-13T02:24:44.547443507Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68\""
Dec 13 02:24:44.550364 env[1728]: time="2024-12-13T02:24:44.550037299Z" level=info msg="StartContainer for \"132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68\""
Dec 13 02:24:44.578844 systemd[1]: Started cri-containerd-132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68.scope.
Dec 13 02:24:44.671656 env[1728]: time="2024-12-13T02:24:44.671598232Z" level=info msg="StartContainer for \"132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68\" returns successfully"
Dec 13 02:24:44.695531 systemd[1]: cri-containerd-132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68.scope: Deactivated successfully.
Dec 13 02:24:44.733271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68-rootfs.mount: Deactivated successfully.
Dec 13 02:24:44.768272 env[1728]: time="2024-12-13T02:24:44.768215892Z" level=info msg="shim disconnected" id=132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68
Dec 13 02:24:44.768272 env[1728]: time="2024-12-13T02:24:44.768264693Z" level=warning msg="cleaning up after shim disconnected" id=132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68 namespace=k8s.io
Dec 13 02:24:44.768272 env[1728]: time="2024-12-13T02:24:44.768277608Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:44.777339 env[1728]: time="2024-12-13T02:24:44.777294168Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4623 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:45.033696 env[1728]: time="2024-12-13T02:24:45.028924295Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:24:45.067378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197749084.mount: Deactivated successfully.
Dec 13 02:24:45.080874 env[1728]: time="2024-12-13T02:24:45.080822614Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc\""
Dec 13 02:24:45.081767 env[1728]: time="2024-12-13T02:24:45.081728308Z" level=info msg="StartContainer for \"c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc\""
Dec 13 02:24:45.101658 systemd[1]: Started cri-containerd-c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc.scope.
Dec 13 02:24:45.148008 env[1728]: time="2024-12-13T02:24:45.147961063Z" level=info msg="StartContainer for \"c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc\" returns successfully"
Dec 13 02:24:45.165018 systemd[1]: cri-containerd-c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc.scope: Deactivated successfully.
Dec 13 02:24:45.210879 env[1728]: time="2024-12-13T02:24:45.210827631Z" level=info msg="shim disconnected" id=c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc
Dec 13 02:24:45.210879 env[1728]: time="2024-12-13T02:24:45.210873316Z" level=warning msg="cleaning up after shim disconnected" id=c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc namespace=k8s.io
Dec 13 02:24:45.210879 env[1728]: time="2024-12-13T02:24:45.210885103Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:45.220565 env[1728]: time="2024-12-13T02:24:45.220500636Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4687 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:45.251302 kubelet[2563]: W1213 02:24:45.251231 2563 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0866bef_65f6_4a5a_8770_5d1ab3a9cb20.slice/cri-containerd-4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d.scope WatchSource:0}: container "4b5c10513233d8e79795c23e1c2725d1e3e2b13e9c75ed075c20860c6378ea8d" in namespace "k8s.io": not found
Dec 13 02:24:45.534135 kubelet[2563]: I1213 02:24:45.534090 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0866bef-65f6-4a5a-8770-5d1ab3a9cb20" path="/var/lib/kubelet/pods/d0866bef-65f6-4a5a-8770-5d1ab3a9cb20/volumes"
Dec 13 02:24:46.051017 env[1728]: time="2024-12-13T02:24:46.049124737Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:24:46.083014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846917132.mount: Deactivated successfully.
Dec 13 02:24:46.096955 env[1728]: time="2024-12-13T02:24:46.096897388Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b\""
Dec 13 02:24:46.097733 env[1728]: time="2024-12-13T02:24:46.097678742Z" level=info msg="StartContainer for \"b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b\""
Dec 13 02:24:46.131592 systemd[1]: Started cri-containerd-b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b.scope.
Dec 13 02:24:46.183483 env[1728]: time="2024-12-13T02:24:46.183432901Z" level=info msg="StartContainer for \"b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b\" returns successfully"
Dec 13 02:24:46.194459 systemd[1]: cri-containerd-b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b.scope: Deactivated successfully.
Dec 13 02:24:46.259612 env[1728]: time="2024-12-13T02:24:46.259553974Z" level=info msg="shim disconnected" id=b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b
Dec 13 02:24:46.259612 env[1728]: time="2024-12-13T02:24:46.259602556Z" level=warning msg="cleaning up after shim disconnected" id=b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b namespace=k8s.io
Dec 13 02:24:46.259612 env[1728]: time="2024-12-13T02:24:46.259616300Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:46.273725 env[1728]: time="2024-12-13T02:24:46.273676434Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4744 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:46.636668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b-rootfs.mount: Deactivated successfully.
Dec 13 02:24:47.060487 env[1728]: time="2024-12-13T02:24:47.060166707Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:24:47.088923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959486727.mount: Deactivated successfully.
Dec 13 02:24:47.107341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241669053.mount: Deactivated successfully.
Dec 13 02:24:47.109396 env[1728]: time="2024-12-13T02:24:47.109340365Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc\""
Dec 13 02:24:47.110709 env[1728]: time="2024-12-13T02:24:47.110671147Z" level=info msg="StartContainer for \"ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc\""
Dec 13 02:24:47.156493 systemd[1]: Started cri-containerd-ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc.scope.
Dec 13 02:24:47.209815 systemd[1]: cri-containerd-ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc.scope: Deactivated successfully.
Dec 13 02:24:47.212244 env[1728]: time="2024-12-13T02:24:47.212202316Z" level=info msg="StartContainer for \"ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc\" returns successfully"
Dec 13 02:24:47.246253 env[1728]: time="2024-12-13T02:24:47.246197139Z" level=info msg="shim disconnected" id=ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc
Dec 13 02:24:47.246253 env[1728]: time="2024-12-13T02:24:47.246251444Z" level=warning msg="cleaning up after shim disconnected" id=ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc namespace=k8s.io
Dec 13 02:24:47.246683 env[1728]: time="2024-12-13T02:24:47.246263590Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:47.259748 env[1728]: time="2024-12-13T02:24:47.259689176Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4801 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:47.480333 env[1728]: time="2024-12-13T02:24:47.480295023Z" level=info msg="StopPodSandbox for \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\""
Dec 13 02:24:47.480846 env[1728]: time="2024-12-13T02:24:47.480783971Z" level=info msg="TearDown network for sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" successfully"
Dec 13 02:24:47.480846 env[1728]: time="2024-12-13T02:24:47.480840683Z" level=info msg="StopPodSandbox for \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" returns successfully"
Dec 13 02:24:47.481300 env[1728]: time="2024-12-13T02:24:47.481222273Z" level=info msg="RemovePodSandbox for \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\""
Dec 13 02:24:47.481390 env[1728]: time="2024-12-13T02:24:47.481292590Z" level=info msg="Forcibly stopping sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\""
Dec 13 02:24:47.481646 env[1728]: time="2024-12-13T02:24:47.481614432Z" level=info msg="TearDown network for sandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" successfully"
Dec 13 02:24:47.493351 env[1728]: time="2024-12-13T02:24:47.493295100Z" level=info msg="RemovePodSandbox \"444454ffb777162b9acbc9119f90aba6b12c122e558adc9c553f018644d0fbc4\" returns successfully"
Dec 13 02:24:47.494212 env[1728]: time="2024-12-13T02:24:47.494174980Z" level=info msg="StopPodSandbox for \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\""
Dec 13 02:24:47.494408 env[1728]: time="2024-12-13T02:24:47.494283500Z" level=info msg="TearDown network for sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" successfully"
Dec 13 02:24:47.494408 env[1728]: time="2024-12-13T02:24:47.494330366Z" level=info msg="StopPodSandbox for \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" returns successfully"
Dec 13 02:24:47.495443 env[1728]: time="2024-12-13T02:24:47.495262546Z" level=info msg="RemovePodSandbox for \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\""
Dec 13 02:24:47.495706 env[1728]: time="2024-12-13T02:24:47.495446974Z" level=info msg="Forcibly stopping sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\""
Dec 13 02:24:47.495791 env[1728]: time="2024-12-13T02:24:47.495712220Z" level=info msg="TearDown network for sandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" successfully"
Dec 13 02:24:47.502199 env[1728]: time="2024-12-13T02:24:47.502150022Z" level=info msg="RemovePodSandbox \"c35c37f5beea455e812412aec0f5c532afe4ef2d3049d2fb9c8e2516cb8135aa\" returns successfully"
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.502966055Z" level=info msg="StopPodSandbox for \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\""
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.503113241Z" level=info msg="TearDown network for sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" successfully"
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.503271283Z" level=info msg="StopPodSandbox for \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" returns successfully"
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.503718327Z" level=info msg="RemovePodSandbox for \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\""
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.503770715Z" level=info msg="Forcibly stopping sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\""
Dec 13 02:24:47.504034 env[1728]: time="2024-12-13T02:24:47.503972966Z" level=info msg="TearDown network for sandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" successfully"
Dec 13 02:24:47.509724 env[1728]: time="2024-12-13T02:24:47.509678518Z" level=info msg="RemovePodSandbox \"17e7e6d9ae26552548637f0bba8ce6124aa35bb6220bb294a090002535f43034\" returns successfully"
Dec 13 02:24:47.723760 kubelet[2563]: E1213 02:24:47.723709 2563 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:24:48.074715 env[1728]: time="2024-12-13T02:24:48.074661655Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:24:48.132545 env[1728]: time="2024-12-13T02:24:48.131755664Z" level=info msg="CreateContainer within sandbox \"89878bcd97f16ff1a25516991d3c47e20d9fef9b6254757c5b7f0bca9ca1267a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66\""
Dec 13 02:24:48.133737 env[1728]: time="2024-12-13T02:24:48.133701093Z" level=info msg="StartContainer for \"8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66\""
Dec 13 02:24:48.201617 systemd[1]: Started cri-containerd-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66.scope.
Dec 13 02:24:48.254563 env[1728]: time="2024-12-13T02:24:48.254197145Z" level=info msg="StartContainer for \"8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66\" returns successfully"
Dec 13 02:24:48.362810 kubelet[2563]: W1213 02:24:48.362682 2563 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb04ba3_7cf2_4038_b436_a63a9cf5d03a.slice/cri-containerd-132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68.scope WatchSource:0}: task 132619327df7e1f1d30872cdaea3e549f937f4d7bbaadd3980aeb3aa6d715f68 not found: not found
Dec 13 02:24:48.637208 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.9MsSLw.mount: Deactivated successfully.
Dec 13 02:24:49.102128 kubelet[2563]: I1213 02:24:49.100096 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d7rjp" podStartSLOduration=5.100057186 podStartE2EDuration="5.100057186s" podCreationTimestamp="2024-12-13 02:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:24:49.098195672 +0000 UTC m=+121.902569467" watchObservedRunningTime="2024-12-13 02:24:49.100057186 +0000 UTC m=+121.904430999"
Dec 13 02:24:49.111558 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:24:50.308572 kubelet[2563]: I1213 02:24:50.307504 2563 setters.go:600] "Node became not ready" node="ip-172-31-16-161" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:24:50Z","lastTransitionTime":"2024-12-13T02:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:24:50.531850 kubelet[2563]: E1213 02:24:50.531798 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9s425" podUID="d5158bea-ca55-43de-9fbe-d99281ed0280"
Dec 13 02:24:50.942351 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.KYMEkH.mount: Deactivated successfully.
Dec 13 02:24:51.477644 kubelet[2563]: W1213 02:24:51.477530 2563 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb04ba3_7cf2_4038_b436_a63a9cf5d03a.slice/cri-containerd-c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc.scope WatchSource:0}: task c4a40eb57abe77f8b6b518e96341c6f46753b7cd77c9aa703126b7eecc30d0dc not found: not found
Dec 13 02:24:52.254274 systemd-networkd[1462]: lxc_health: Link UP
Dec 13 02:24:52.258199 (udev-worker)[5368]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:24:52.265671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:24:52.265158 systemd-networkd[1462]: lxc_health: Gained carrier
Dec 13 02:24:52.533030 kubelet[2563]: E1213 02:24:52.532846 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9s425" podUID="d5158bea-ca55-43de-9fbe-d99281ed0280"
Dec 13 02:24:52.533736 kubelet[2563]: E1213 02:24:52.532941 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-85pr4" podUID="c453a818-57e2-4827-88f5-cfb93c3fec40"
Dec 13 02:24:53.736048 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.VGlZu4.mount: Deactivated successfully.
Dec 13 02:24:53.979577 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Dec 13 02:24:54.604758 kubelet[2563]: W1213 02:24:54.604705 2563 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb04ba3_7cf2_4038_b436_a63a9cf5d03a.slice/cri-containerd-b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b.scope WatchSource:0}: task b94f95036ab2119e75b7fb8310389d8e0a3673342befe14865c980dea540346b not found: not found
Dec 13 02:24:56.226878 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.kD3VMT.mount: Deactivated successfully.
Dec 13 02:24:57.720079 kubelet[2563]: W1213 02:24:57.720023 2563 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb04ba3_7cf2_4038_b436_a63a9cf5d03a.slice/cri-containerd-ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc.scope WatchSource:0}: task ff33d4a108d7546e6edf98bb7173d7d6712f10a2d7e363be31b2f454708f2bdc not found: not found
Dec 13 02:24:58.553250 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.iwDurk.mount: Deactivated successfully.
Dec 13 02:25:00.816095 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.y2Uixp.mount: Deactivated successfully.
Dec 13 02:25:03.022778 systemd[1]: run-containerd-runc-k8s.io-8db54fd0dbc26c031b24f2154634c693aa86c494d18273dca5f0adee2599db66-runc.KpmEvC.mount: Deactivated successfully.
Dec 13 02:25:03.527102 sshd[4481]: pam_unix(sshd:session): session closed for user core
Dec 13 02:25:03.545322 systemd[1]: sshd@25-172.31.16.161:22-139.178.68.195:46942.service: Deactivated successfully.
Dec 13 02:25:03.546594 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:25:03.547747 systemd-logind[1722]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:25:03.548989 systemd-logind[1722]: Removed session 26.