Dec 13 14:33:10.225389 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:33:10.225424 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:10.225441 kernel: BIOS-provided physical RAM map:
Dec 13 14:33:10.225452 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:33:10.225463 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:33:10.225474 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:33:10.225490 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:33:10.225502 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:33:10.225514 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:33:10.225525 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:33:10.225537 kernel: NX (Execute Disable) protection: active
Dec 13 14:33:10.225549 kernel: SMBIOS 2.7 present.
Dec 13 14:33:10.225577 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:33:10.225648 kernel: Hypervisor detected: KVM
Dec 13 14:33:10.225667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:33:10.225681 kernel: kvm-clock: cpu 0, msr 7019a001, primary cpu clock
Dec 13 14:33:10.225694 kernel: kvm-clock: using sched offset of 9070817713 cycles
Dec 13 14:33:10.225708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:33:10.225721 kernel: tsc: Detected 2500.004 MHz processor
Dec 13 14:33:10.225734 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:33:10.225750 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:33:10.225763 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:33:10.225776 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:33:10.225789 kernel: Using GB pages for direct mapping
Dec 13 14:33:10.225802 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:33:10.225815 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:33:10.225828 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:33:10.225841 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:33:10.225854 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:33:10.225870 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:33:10.225882 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:33:10.225895 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:33:10.225909 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:33:10.225921 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
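The two "usable" BIOS-e820 ranges above can be cross-checked against the "Memory:" line that appears later in this log; treating each range as inclusive, a small sketch in Python (nothing instance-specific is assumed, the bounds are copied from the lines above):

    # Sum the two BIOS-e820 ranges marked "usable" (inclusive bounds).
    usable = [(0x0000000000000000, 0x000000000009fbff),
              (0x0000000000100000, 0x000000007d9e9fff)]
    total = sum(end - start + 1 for start, end in usable)
    print(total // 1024, "KiB")  # 2057767 KiB

That is 7 KiB above the 2057760K total reported later, which is exactly the page at 0x0 that the kernel re-reserves ("e820: update [mem 0x00000000-0x00000fff]") plus the 3 KiB trimmed from the top of the low range (0x9f000-0x9fbff) when node 0 is set up.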
Dec 13 14:33:10.225934 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:33:10.225947 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:33:10.225960 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:33:10.225976 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:33:10.225988 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:33:10.226001 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:33:10.226020 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:33:10.226033 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:33:10.226047 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:33:10.226061 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:33:10.226077 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:33:10.226091 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:33:10.226104 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:33:10.226118 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:33:10.226132 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:33:10.226146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:33:10.226160 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:33:10.226173 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:33:10.226189 kernel: Zone ranges:
Dec 13 14:33:10.226203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:33:10.226217 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:33:10.226231 kernel: Normal empty
Dec 13 14:33:10.226244 kernel: Movable zone start for each node
Dec 13 14:33:10.226258 kernel: Early memory node ranges
Dec 13 14:33:10.226272 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:33:10.226285 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:33:10.226299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:33:10.226315 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:33:10.226326 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:33:10.226340 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:33:10.226354 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:33:10.226368 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:33:10.226382 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:33:10.226395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:33:10.226409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:33:10.226423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:33:10.226440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:33:10.226454 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:33:10.226467 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:33:10.226481 kernel: TSC deadline timer available
Dec 13 14:33:10.226495 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:33:10.226605 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:33:10.226620 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:33:10.226633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:33:10.226648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:33:10.226665 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:33:10.226679 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:33:10.226796 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:33:10.226811 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:33:10.226855 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:33:10.226869 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:33:10.226883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:33:10.226898 kernel: Policy zone: DMA32
Dec 13 14:33:10.226915 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:10.226934 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:33:10.226948 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:33:10.226962 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:33:10.226977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:33:10.226992 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:33:10.227006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:33:10.227019 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:33:10.227032 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:33:10.227049 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:33:10.227063 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:33:10.227079 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:33:10.227093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:33:10.227107 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:33:10.227121 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:33:10.227135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:33:10.227150 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:33:10.227164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:33:10.227181 kernel: random: crng init done
Dec 13 14:33:10.227195 kernel: Console: colour VGA+ 80x25
Dec 13 14:33:10.227209 kernel: printk: console [ttyS0] enabled
Dec 13 14:33:10.227223 kernel: ACPI: Core revision 20210730
Dec 13 14:33:10.227237 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:33:10.227257 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:33:10.227270 kernel: x2apic enabled
Dec 13 14:33:10.227284 kernel: Switched APIC routing to physical x2apic.
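The "Kernel command line:" entry above repeats rootflags=rw and mount.usrflags=ro because the initrd tooling prepends them to the line the bootloader passed (the dracut-cmdline entry later in this log shows the same expansion). A rough key=value split, assuming simple last-one-wins semantics rather than the kernel's per-parameter rules:

    # Abbreviated copy of the logged command line; the duplicated flags
    # carry the same value, so nothing is lost by collapsing them.
    cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
               "mount.usr=/dev/mapper/usr root=LABEL=ROOT "
               "console=ttyS0,115200n8 net.ifnames=0 "
               "nvme_core.io_timeout=4294967295")
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value  # last occurrence wins
    print(params["root"], params["console"])  # LABEL=ROOT ttyS0,115200n8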
Dec 13 14:33:10.227298 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 13 14:33:10.227316 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Dec 13 14:33:10.227330 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:33:10.227344 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:33:10.227359 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:33:10.227384 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:33:10.227401 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:33:10.227416 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:33:10.227431 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:33:10.227446 kernel: RETBleed: Vulnerable
Dec 13 14:33:10.227461 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:33:10.227475 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:33:10.227490 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:33:10.227504 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:33:10.227519 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:33:10.227537 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:33:10.227553 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:33:10.227582 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:33:10.227594 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:33:10.227608 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:33:10.227625 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:33:10.227638 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:33:10.227653 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:33:10.227666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:33:10.227681 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:33:10.227695 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:33:10.227709 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:33:10.227723 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:33:10.227737 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:33:10.227751 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:33:10.227766 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:33:10.227780 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:33:10.227797 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:33:10.227811 kernel: LSM: Security Framework initializing
Dec 13 14:33:10.227825 kernel: SELinux: Initializing.
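The preset BogoMIPS figure in the "Calibrating delay loop (skipped)" line above is pure arithmetic on lpj (loops per jiffy), and the SMP total printed later ("10000.01 BogoMIPS") follows from the same formula; assuming HZ=1000, which both printed values are consistent with:

    # Reproduce the kernel's "%lu.%02lu BogoMIPS" arithmetic from lpj.
    HZ, lpj, cpus = 1000, 2500004, 2
    def bogomips(l):
        return l // (500000 // HZ), (l // (5000 // HZ)) % 100
    print("%d.%02d" % bogomips(lpj))         # 5000.00
    print("%d.%02d" % bogomips(cpus * lpj))  # 10000.01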
Dec 13 14:33:10.227839 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:33:10.227854 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:33:10.227868 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:33:10.227883 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:33:10.227898 kernel: signal: max sigframe size: 3632
Dec 13 14:33:10.227913 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:33:10.227925 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:33:10.227943 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:33:10.227957 kernel: x86: Booting SMP configuration:
Dec 13 14:33:10.227972 kernel: .... node #0, CPUs: #1
Dec 13 14:33:10.227986 kernel: kvm-clock: cpu 1, msr 7019a041, secondary cpu clock
Dec 13 14:33:10.228000 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:33:10.228015 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:33:10.228031 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:33:10.228045 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:33:10.228060 kernel: smpboot: Max logical packages: 1
Dec 13 14:33:10.228077 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Dec 13 14:33:10.228090 kernel: devtmpfs: initialized
Dec 13 14:33:10.228175 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:33:10.228189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:33:10.228204 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:33:10.228219 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:33:10.228233 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:33:10.228248 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:33:10.228263 kernel: audit: type=2000 audit(1734100389.857:1): state=initialized audit_enabled=0 res=1
Dec 13 14:33:10.228280 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:33:10.228295 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:33:10.228308 kernel: cpuidle: using governor menu
Dec 13 14:33:10.228321 kernel: ACPI: bus type PCI registered
Dec 13 14:33:10.228335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:33:10.228350 kernel: dca service started, version 1.12.1
Dec 13 14:33:10.228364 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:33:10.228379 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:33:10.228395 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:33:10.228412 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:33:10.228426 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:33:10.228441 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:33:10.228456 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:33:10.228471 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:33:10.228485 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:33:10.228500 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:33:10.228515 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:33:10.228539 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:33:10.228667 kernel: ACPI: Interpreter enabled
Dec 13 14:33:10.228680 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:33:10.228691 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:33:10.228703 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:33:10.228714 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:33:10.228726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:33:10.228933 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:33:10.229057 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:33:10.229088 kernel: acpiphp: Slot [3] registered
Dec 13 14:33:10.229102 kernel: acpiphp: Slot [4] registered
Dec 13 14:33:10.229115 kernel: acpiphp: Slot [5] registered
Dec 13 14:33:10.229128 kernel: acpiphp: Slot [6] registered
Dec 13 14:33:10.229143 kernel: acpiphp: Slot [7] registered
Dec 13 14:33:10.229155 kernel: acpiphp: Slot [8] registered
Dec 13 14:33:10.229168 kernel: acpiphp: Slot [9] registered
Dec 13 14:33:10.229184 kernel: acpiphp: Slot [10] registered
Dec 13 14:33:10.229197 kernel: acpiphp: Slot [11] registered
Dec 13 14:33:10.229215 kernel: acpiphp: Slot [12] registered
Dec 13 14:33:10.229228 kernel: acpiphp: Slot [13] registered
Dec 13 14:33:10.229242 kernel: acpiphp: Slot [14] registered
Dec 13 14:33:10.229257 kernel: acpiphp: Slot [15] registered
Dec 13 14:33:10.229270 kernel: acpiphp: Slot [16] registered
Dec 13 14:33:10.229285 kernel: acpiphp: Slot [17] registered
Dec 13 14:33:10.229298 kernel: acpiphp: Slot [18] registered
Dec 13 14:33:10.229310 kernel: acpiphp: Slot [19] registered
Dec 13 14:33:10.229322 kernel: acpiphp: Slot [20] registered
Dec 13 14:33:10.229338 kernel: acpiphp: Slot [21] registered
Dec 13 14:33:10.229351 kernel: acpiphp: Slot [22] registered
Dec 13 14:33:10.229364 kernel: acpiphp: Slot [23] registered
Dec 13 14:33:10.229376 kernel: acpiphp: Slot [24] registered
Dec 13 14:33:10.229388 kernel: acpiphp: Slot [25] registered
Dec 13 14:33:10.229400 kernel: acpiphp: Slot [26] registered
Dec 13 14:33:10.229413 kernel: acpiphp: Slot [27] registered
Dec 13 14:33:10.229425 kernel: acpiphp: Slot [28] registered
Dec 13 14:33:10.229437 kernel: acpiphp: Slot [29] registered
Dec 13 14:33:10.229449 kernel: acpiphp: Slot [30] registered
Dec 13 14:33:10.229464 kernel: acpiphp: Slot [31] registered
Dec 13 14:33:10.229476 kernel: PCI host bridge to bus 0000:00
Dec 13 14:33:10.229738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:33:10.229869 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:33:10.229982 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:33:10.230088 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:33:10.230192 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:33:10.230324 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:33:10.230449 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:33:10.230657 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:33:10.230775 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:33:10.230890 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:33:10.231080 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:33:10.231204 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:33:10.231339 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:33:10.231469 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:33:10.231610 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:33:10.231736 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:33:10.231862 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 11718 usecs
Dec 13 14:33:10.232208 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:33:10.232347 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:33:10.232481 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:33:10.232839 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:33:10.232991 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:33:10.233121 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:33:10.233331 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:33:10.233465 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:33:10.233490 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:33:10.233507 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:33:10.233521 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:33:10.233536 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:33:10.233551 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:33:10.233642 kernel: iommu: Default domain type: Translated
Dec 13 14:33:10.233656 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:33:10.233792 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:33:10.233917 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:33:10.234050 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:33:10.234070 kernel: vgaarb: loaded
Dec 13 14:33:10.234085 kernel: pps_core: LinuxPPS API ver. 1 registered
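Each "pci 0000:00:xx.0: [vendor:device] type 00 class 0x......" line above carries the 24-bit PCI class code, one byte each for base class, sub-class and programming interface; decoding the three 1d0f (Amazon) devices enumerated above:

    # Split a 24-bit PCI class code into its three bytes.
    def decode(cls):
        return cls >> 16, (cls >> 8) & 0xFF, cls & 0xFF

    for dev, cls in [("00:03.0", 0x030000),   # display / VGA
                     ("00:04.0", 0x010802),   # storage / NVM / NVMe (the EBS disk)
                     ("00:05.0", 0x020000)]:  # network / Ethernet (the ENA NIC)
        print(dev, [hex(b) for b in decode(cls)])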
Dec 13 14:33:10.234101 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:33:10.234115 kernel: PTP clock support registered
Dec 13 14:33:10.234130 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:33:10.234145 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:33:10.234161 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:33:10.234179 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:33:10.234193 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:33:10.234208 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:33:10.234223 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:33:10.234239 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:33:10.234254 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:33:10.234269 kernel: pnp: PnP ACPI init
Dec 13 14:33:10.234284 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:33:10.234299 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:33:10.234317 kernel: NET: Registered PF_INET protocol family
Dec 13 14:33:10.234331 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:33:10.234344 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:33:10.234359 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:33:10.234373 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:33:10.234389 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:33:10.234405 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:33:10.234420 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:33:10.234435 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:33:10.234453 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:33:10.234468 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:33:10.234625 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:33:10.235950 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:33:10.236089 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:33:10.236214 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:33:10.236361 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:33:10.240767 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:33:10.240908 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:33:10.240931 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:33:10.240947 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 13 14:33:10.240963 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:33:10.240979 kernel: Initialise system trusted keyrings
Dec 13 14:33:10.240994 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:33:10.241010 kernel: Key type asymmetric registered
Dec 13 14:33:10.241025 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:33:10.241044 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:33:10.241059 kernel: io scheduler mq-deadline registered
Dec 13 14:33:10.241075 kernel: io scheduler kyber registered
Dec 13 14:33:10.241089 kernel: io scheduler bfq registered
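Every "hash table entries: N (order: k, B bytes, linear)" line above satisfies B = PAGE_SIZE << k, since the order is the page-allocation order; a quick consistency check on the TCP/UDP tables:

    PAGE_SIZE = 4096
    tables = [("TCP established", 5, 131072),
              ("TCP bind",        6, 262144),
              ("UDP",             3,  32768)]
    for name, order, size in tables:
        assert PAGE_SIZE << order == size  # holds for all three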
Dec 13 14:33:10.241105 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:33:10.241121 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:33:10.241135 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:33:10.241151 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:33:10.241166 kernel: i8042: Warning: Keylock active
Dec 13 14:33:10.241183 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:33:10.241198 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:33:10.241369 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:33:10.241504 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:33:10.242660 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:33:09 UTC (1734100389)
Dec 13 14:33:10.243194 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:33:10.243224 kernel: intel_pstate: CPU model not supported
Dec 13 14:33:10.243240 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:33:10.243265 kernel: Segment Routing with IPv6
Dec 13 14:33:10.243279 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:33:10.243292 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:33:10.243305 kernel: Key type dns_resolver registered
Dec 13 14:33:10.243319 kernel: IPI shorthand broadcast: enabled
Dec 13 14:33:10.243333 kernel: sched_clock: Marking stable (585444896, 324104448)->(1039133967, -129584623)
Dec 13 14:33:10.243348 kernel: registered taskstats version 1
Dec 13 14:33:10.243362 kernel: Loading compiled-in X.509 certificates
Dec 13 14:33:10.243375 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:33:10.243394 kernel: Key type .fscrypt registered
Dec 13 14:33:10.243409 kernel: Key type fscrypt-provisioning registered
Dec 13 14:33:10.243976 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:33:10.243996 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:33:10.244013 kernel: ima: No architecture policies found
Dec 13 14:33:10.244029 kernel: clk: Disabling unused clocks
Dec 13 14:33:10.244045 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:33:10.244061 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:33:10.244077 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:33:10.244097 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:33:10.244113 kernel: Run /init as init process
Dec 13 14:33:10.244129 kernel: with arguments:
Dec 13 14:33:10.244145 kernel: /init
Dec 13 14:33:10.244160 kernel: with environment:
Dec 13 14:33:10.244176 kernel: HOME=/
Dec 13 14:33:10.244191 kernel: TERM=linux
Dec 13 14:33:10.244206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:33:10.244227 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:33:10.244624 systemd[1]: Detected virtualization amazon.
Dec 13 14:33:10.244639 systemd[1]: Detected architecture x86-64.
Dec 13 14:33:10.244653 systemd[1]: Running in initrd.
Dec 13 14:33:10.244681 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:33:10.244698 systemd[1]: Hostname set to <localhost>.
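The rtc_cmos line above prints the same instant twice, once as a date and once as a Unix timestamp; the two representations agree:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1734100389, tz=timezone.utc))
    # 2024-12-13 14:33:09+00:00, matching "2024-12-13T14:33:09 UTC"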
Dec 13 14:33:10.244717 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:33:10.244733 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:33:10.244749 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:33:10.244765 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:33:10.244779 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:33:10.244793 systemd[1]: Reached target paths.target.
Dec 13 14:33:10.244808 systemd[1]: Reached target slices.target.
Dec 13 14:33:10.244825 systemd[1]: Reached target swap.target.
Dec 13 14:33:10.244842 systemd[1]: Reached target timers.target.
Dec 13 14:33:10.244857 systemd[1]: Listening on iscsid.socket.
Dec 13 14:33:10.244872 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:33:10.244887 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:33:10.244902 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:33:10.244916 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:33:10.244931 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:33:10.244946 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:33:10.244963 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:33:10.244978 systemd[1]: Reached target sockets.target.
Dec 13 14:33:10.244994 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:33:10.245009 systemd[1]: Finished network-cleanup.service.
Dec 13 14:33:10.245024 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:33:10.245039 systemd[1]: Starting systemd-journald.service...
Dec 13 14:33:10.245055 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:33:10.245070 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:33:10.245084 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:33:10.245109 systemd-journald[185]: Journal started
Dec 13 14:33:10.248644 systemd-journald[185]: Runtime Journal (/run/log/journal/ec21875dd8b32e7192076941f0bfa4aa) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:33:10.266264 systemd[1]: Started systemd-journald.service.
Dec 13 14:33:10.294607 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:33:10.448366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:33:10.448396 kernel: Bridge firewalling registered
Dec 13 14:33:10.448414 kernel: SCSI subsystem initialized
Dec 13 14:33:10.448434 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:33:10.448453 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:33:10.448470 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:33:10.448489 kernel: audit: type=1130 audit(1734100390.446:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.305671 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:33:10.305686 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:33:10.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.305736 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:33:10.309389 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:33:10.472868 kernel: audit: type=1130 audit(1734100390.453:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.473822 kernel: audit: type=1130 audit(1734100390.467:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.353395 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:33:10.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.396080 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:33:10.481710 kernel: audit: type=1130 audit(1734100390.473:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.447574 systemd[1]: Started systemd-resolved.service.
Dec 13 14:33:10.454045 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:33:10.507237 kernel: audit: type=1130 audit(1734100390.481:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.507285 kernel: audit: type=1130 audit(1734100390.487:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.472825 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:33:10.474017 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:33:10.481957 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:33:10.487973 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:33:10.509147 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:33:10.516854 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:33:10.519162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:33:10.536742 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:33:10.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.543891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:33:10.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.555578 kernel: audit: type=1130 audit(1734100390.537:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.555614 kernel: audit: type=1130 audit(1734100390.543:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.574885 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:33:10.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.583784 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:33:10.592654 kernel: audit: type=1130 audit(1734100390.578:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.603171 dracut-cmdline[206]: dracut-dracut-053
Dec 13 14:33:10.607186 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:10.764592 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:33:10.785585 kernel: iscsi: registered transport (tcp)
Dec 13 14:33:10.813198 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:33:10.813277 kernel: QLogic iSCSI HBA Driver
Dec 13 14:33:10.849102 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:33:10.850426 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:33:10.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:10.907620 kernel: raid6: avx512x4 gen() 16234 MB/s
Dec 13 14:33:10.924620 kernel: raid6: avx512x4 xor() 7020 MB/s
Dec 13 14:33:10.942617 kernel: raid6: avx512x2 gen() 16369 MB/s
Dec 13 14:33:10.959622 kernel: raid6: avx512x2 xor() 19318 MB/s
Dec 13 14:33:10.976635 kernel: raid6: avx512x1 gen() 13612 MB/s
Dec 13 14:33:10.994610 kernel: raid6: avx512x1 xor() 14559 MB/s
Dec 13 14:33:11.013609 kernel: raid6: avx2x4 gen() 6127 MB/s
Dec 13 14:33:11.031616 kernel: raid6: avx2x4 xor() 3666 MB/s
Dec 13 14:33:11.048618 kernel: raid6: avx2x2 gen() 14373 MB/s
Dec 13 14:33:11.067614 kernel: raid6: avx2x2 xor() 10040 MB/s
Dec 13 14:33:11.086616 kernel: raid6: avx2x1 gen() 9990 MB/s
Dec 13 14:33:11.103616 kernel: raid6: avx2x1 xor() 7518 MB/s
Dec 13 14:33:11.121626 kernel: raid6: sse2x4 gen() 7584 MB/s
Dec 13 14:33:11.138614 kernel: raid6: sse2x4 xor() 4615 MB/s
Dec 13 14:33:11.155617 kernel: raid6: sse2x2 gen() 7880 MB/s
Dec 13 14:33:11.173613 kernel: raid6: sse2x2 xor() 3655 MB/s
Dec 13 14:33:11.191617 kernel: raid6: sse2x1 gen() 6008 MB/s
Dec 13 14:33:11.209750 kernel: raid6: sse2x1 xor() 4015 MB/s
Dec 13 14:33:11.209833 kernel: raid6: using algorithm avx512x2 gen() 16369 MB/s
Dec 13 14:33:11.209851 kernel: raid6: .... xor() 19318 MB/s, rmw enabled
Dec 13 14:33:11.211140 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:33:11.231654 kernel: xor: automatically using best checksumming function avx
Dec 13 14:33:11.370973 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:33:11.386580 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:33:11.389491 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:33:11.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:11.388000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:33:11.388000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:33:11.407784 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Dec 13 14:33:11.414296 systemd[1]: Started systemd-udevd.service.
Dec 13 14:33:11.417026 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:33:11.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:11.437469 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation
Dec 13 14:33:11.477340 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:33:11.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:11.480984 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:33:11.544265 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:33:11.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:11.647616 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:33:11.647681 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:33:11.669518 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:33:11.669753 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
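The raid6 lines above time every available gen()/xor() implementation and then pick the generator with the highest throughput; re-deriving the choice from the logged numbers:

    gen_mb_s = {"avx512x4": 16234, "avx512x2": 16369, "avx512x1": 13612,
                "avx2x4": 6127, "avx2x2": 14373, "avx2x1": 9990,
                "sse2x4": 7584, "sse2x2": 7880, "sse2x1": 6008}
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best)  # avx512x2, as in "raid6: using algorithm avx512x2"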
Dec 13 14:33:11.669899 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:22:5c:31:2b:13
Dec 13 14:33:11.670036 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:33:11.670055 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:33:11.662944 (udev-worker)[426]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:11.718474 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:33:11.718723 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:33:11.728582 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:33:11.737714 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:33:11.737771 kernel: GPT:9289727 != 16777215
Dec 13 14:33:11.737790 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:33:11.737816 kernel: GPT:9289727 != 16777215
Dec 13 14:33:11.737832 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:33:11.737847 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:11.827585 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (434)
Dec 13 14:33:11.930637 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:33:11.973425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:33:11.978034 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:33:11.991707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:33:12.004292 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:33:12.007223 systemd[1]: Starting disk-uuid.service...
Dec 13 14:33:12.017708 disk-uuid[585]: Primary Header is updated.
Dec 13 14:33:12.017708 disk-uuid[585]: Secondary Entries is updated.
Dec 13 14:33:12.017708 disk-uuid[585]: Secondary Header is updated.
Dec 13 14:33:12.028585 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:12.044599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:13.041749 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:13.043956 disk-uuid[586]: The operation has completed successfully.
Dec 13 14:33:13.234024 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:33:13.234138 systemd[1]: Finished disk-uuid.service.
Dec 13 14:33:13.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.294830 systemd[1]: Starting verity-setup.service...
Dec 13 14:33:13.327587 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:33:13.450398 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:33:13.461525 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:33:13.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.465319 systemd[1]: Finished verity-setup.service.
Dec 13 14:33:13.623223 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:33:13.626018 systemd[1]: Mounted sysusr-usr.mount.
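The "GPT:9289727 != 16777215" complaints above compare the last LBA recorded in the partition table against the real end of the disk, in 512-byte sectors; the mismatch is the usual signature of a disk image written to a larger volume, which is presumably why disk-uuid then rewrites the headers:

    image_last_lba, disk_last_lba = 9289727, 16777215
    print((image_last_lba + 1) * 512 / 2**30)  # ~4.43 GiB image
    print((disk_last_lba + 1) * 512 / 2**30)   # 8.00 GiB volume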
Dec 13 14:33:13.626408 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:33:13.627439 systemd[1]: Starting ignition-setup.service...
Dec 13 14:33:13.637841 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:33:13.669383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:33:13.669447 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:33:13.669465 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:33:13.714641 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:33:13.753814 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:33:13.793232 systemd[1]: Finished ignition-setup.service.
Dec 13 14:33:13.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.800385 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:33:13.808195 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:33:13.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.810000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:33:13.812196 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:33:13.840643 systemd-networkd[1014]: lo: Link UP
Dec 13 14:33:13.840655 systemd-networkd[1014]: lo: Gained carrier
Dec 13 14:33:13.843674 systemd-networkd[1014]: Enumeration completed
Dec 13 14:33:13.844003 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:33:13.844680 systemd[1]: Started systemd-networkd.service.
Dec 13 14:33:13.851955 systemd[1]: Reached target network.target.
Dec 13 14:33:13.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.856682 systemd[1]: Starting iscsiuio.service...
Dec 13 14:33:13.857590 systemd-networkd[1014]: eth0: Link UP
Dec 13 14:33:13.857596 systemd-networkd[1014]: eth0: Gained carrier
Dec 13 14:33:13.876523 systemd[1]: Started iscsiuio.service.
Dec 13 14:33:13.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.878766 systemd-networkd[1014]: eth0: DHCPv4 address 172.31.18.151/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:33:13.880426 systemd[1]: Starting iscsid.service...
Dec 13 14:33:13.890119 iscsid[1019]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:33:13.890119 iscsid[1019]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:33:13.890119 iscsid[1019]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:33:13.890119 iscsid[1019]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:33:13.890119 iscsid[1019]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:33:13.890119 iscsid[1019]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:33:13.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.890194 systemd[1]: Started iscsid.service.
Dec 13 14:33:13.893861 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:33:13.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:13.921248 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:33:13.922688 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:33:13.926352 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:33:13.927522 systemd[1]: Reached target remote-fs.target.
Dec 13 14:33:13.929686 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:33:13.953046 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:33:13.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:14.135452 ignition[1011]: Ignition 2.14.0
Dec 13 14:33:14.135466 ignition[1011]: Stage: fetch-offline
Dec 13 14:33:14.135623 ignition[1011]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:14.135664 ignition[1011]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:14.173454 ignition[1011]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:14.174304 ignition[1011]: Ignition finished successfully
Dec 13 14:33:14.177312 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:33:14.189883 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 14:33:14.189962 kernel: audit: type=1130 audit(1734100394.179:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:14.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:14.181273 systemd[1]: Starting ignition-fetch.service...
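The iscsid warning above spells out the InitiatorName syntax, iqn.yyyy-mm.<reversed domain name>[:identifier]; a hypothetical helper that builds such a name (the domain and identifier here are illustrative only):

    def make_iqn(year, month, domain, identifier=None):
        reversed_domain = ".".join(reversed(domain.split(".")))
        name = f"iqn.{year:04d}-{month:02d}.{reversed_domain}"
        return f"{name}:{identifier}" if identifier else name

    print(make_iqn(2001, 4, "redhat.com", "fc6"))
    # iqn.2001-04.com.redhat:fc6, the example iscsid itself prints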
Dec 13 14:33:14.201799 ignition[1038]: Ignition 2.14.0
Dec 13 14:33:14.201813 ignition[1038]: Stage: fetch
Dec 13 14:33:14.202036 ignition[1038]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:14.202112 ignition[1038]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:14.219249 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:14.221625 ignition[1038]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:14.281014 ignition[1038]: INFO : PUT result: OK
Dec 13 14:33:14.296841 ignition[1038]: DEBUG : parsed url from cmdline: ""
Dec 13 14:33:14.296841 ignition[1038]: INFO : no config URL provided
Dec 13 14:33:14.296841 ignition[1038]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:33:14.302311 ignition[1038]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:33:14.302311 ignition[1038]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:14.308036 ignition[1038]: INFO : PUT result: OK
Dec 13 14:33:14.308036 ignition[1038]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:33:14.311323 ignition[1038]: INFO : GET result: OK
Dec 13 14:33:14.311323 ignition[1038]: DEBUG : parsing config with SHA512: 7c9a6dd261282c9ad2aff49b4cb0c6b73966809e5db1aefde7f72a9c5b3bb6546bf0fcf2f8176b74a69b525124eca6ed0e66dfd7e3f824202862b1375418a3d5
Dec 13 14:33:14.319920 unknown[1038]: fetched base config from "system"
Dec 13 14:33:14.319932 unknown[1038]: fetched base config from "system"
Dec 13 14:33:14.319939 unknown[1038]: fetched user config from "aws"
Dec 13 14:33:14.321810 ignition[1038]: fetch: fetch complete
Dec 13 14:33:14.321816 ignition[1038]: fetch: fetch passed
Dec 13 14:33:14.321871 ignition[1038]: Ignition finished successfully
Dec 13 14:33:14.338654 kernel: audit: type=1130 audit(1734100394.332:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:14.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:14.330246 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:33:14.335784 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:33:14.367392 ignition[1044]: Ignition 2.14.0
Dec 13 14:33:14.367407 ignition[1044]: Stage: kargs
Dec 13 14:33:14.367847 ignition[1044]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:14.367880 ignition[1044]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:14.377761 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:14.379550 ignition[1044]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:14.381124 ignition[1044]: INFO : PUT result: OK
Dec 13 14:33:14.384806 ignition[1044]: kargs: kargs passed
Dec 13 14:33:14.384870 ignition[1044]: Ignition finished successfully
Dec 13 14:33:14.386947 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:33:14.389291 systemd[1]: Starting ignition-disks.service...
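The PUT-then-GET pattern in the ignition[1038] lines above is the IMDSv2 session flow: fetch a short-lived token, then present it on the metadata request. A minimal sketch (the header names are the documented AWS ones; the TTL value is an arbitrary choice):

    import urllib.request

    def fetch_user_data():
        token_req = urllib.request.Request(
            "http://169.254.169.254/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
        token = urllib.request.urlopen(token_req).read().decode()
        data_req = urllib.request.Request(
            "http://169.254.169.254/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(data_req).read()

    # Only resolvable from inside an EC2 instance, which is where Ignition runs.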
Dec 13 14:33:14.397693 kernel: audit: type=1130 audit(1734100394.387:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.403721 ignition[1050]: Ignition 2.14.0 Dec 13 14:33:14.403735 ignition[1050]: Stage: disks Dec 13 14:33:14.403936 ignition[1050]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:33:14.403970 ignition[1050]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:33:14.420390 ignition[1050]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:33:14.423118 ignition[1050]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:33:14.428182 ignition[1050]: INFO : PUT result: OK Dec 13 14:33:14.444873 ignition[1050]: disks: disks passed Dec 13 14:33:14.445096 ignition[1050]: Ignition finished successfully Dec 13 14:33:14.452459 systemd[1]: Finished ignition-disks.service. Dec 13 14:33:14.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.455157 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:33:14.475746 kernel: audit: type=1130 audit(1734100394.454:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.475762 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:33:14.478682 systemd[1]: Reached target local-fs.target. Dec 13 14:33:14.484659 systemd[1]: Reached target sysinit.target. Dec 13 14:33:14.491765 systemd[1]: Reached target basic.target. Dec 13 14:33:14.495624 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:33:14.527520 systemd-fsck[1058]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:33:14.533532 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:33:14.546017 kernel: audit: type=1130 audit(1734100394.534:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.546154 systemd[1]: Mounting sysroot.mount... Dec 13 14:33:14.572626 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:33:14.574316 systemd[1]: Mounted sysroot.mount. Dec 13 14:33:14.579460 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:33:14.583872 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:33:14.587782 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:33:14.589482 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
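
[Annotation] The systemd-fsck summary above is e2fsck's "clean" line: used/total inodes, then used/total blocks, so this root filesystem is almost empty. A small parser for that line format, fed the exact string from the log (a convenience sketch, not anything systemd ships):

    import re

    LINE = "ROOT: clean, 621/553520 files, 56021/553472 blocks"  # from systemd-fsck above

    m = re.match(r"(\S+): clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", LINE)
    label = m.group(1)
    iused, itotal, bused, btotal = map(int, m.groups()[1:])
    print(f"{label}: {iused/itotal:.2%} inodes used, {bused/btotal:.2%} blocks used")
    # -> ROOT: 0.11% inodes used, 10.12% blocks used
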
Dec 13 14:33:14.591886 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:33:14.595770 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:33:14.607073 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:33:14.615008 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:33:14.627166 initrd-setup-root[1080]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:33:14.643296 initrd-setup-root[1088]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:33:14.651985 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:33:14.658636 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1075) Dec 13 14:33:14.666174 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:33:14.666263 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:33:14.666284 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:33:14.666301 initrd-setup-root[1104]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:33:14.679703 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:33:14.682630 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:33:14.783081 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:33:14.793150 kernel: audit: type=1130 audit(1734100394.784:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.786439 systemd[1]: Starting ignition-mount.service... Dec 13 14:33:14.793373 systemd[1]: Starting sysroot-boot.service... Dec 13 14:33:14.804001 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:33:14.804134 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:33:14.850180 ignition[1141]: INFO : Ignition 2.14.0 Dec 13 14:33:14.850180 ignition[1141]: INFO : Stage: mount Dec 13 14:33:14.854668 ignition[1141]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:33:14.854668 ignition[1141]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:33:14.879665 systemd[1]: Finished sysroot-boot.service. Dec 13 14:33:14.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.890982 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:33:14.890982 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:33:14.897197 kernel: audit: type=1130 audit(1734100394.885:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.897520 ignition[1141]: INFO : PUT result: OK Dec 13 14:33:14.900507 ignition[1141]: INFO : mount: mount passed Dec 13 14:33:14.900507 ignition[1141]: INFO : Ignition finished successfully Dec 13 14:33:14.910742 systemd[1]: Finished ignition-mount.service. 
Dec 13 14:33:14.932227 kernel: audit: type=1130 audit(1734100394.915:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:14.927400 systemd[1]: Starting ignition-files.service... Dec 13 14:33:14.950989 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:33:14.980580 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1151) Dec 13 14:33:14.986952 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:33:14.987020 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:33:14.987038 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:33:15.001582 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:33:15.004176 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:33:15.017143 ignition[1170]: INFO : Ignition 2.14.0 Dec 13 14:33:15.017143 ignition[1170]: INFO : Stage: files Dec 13 14:33:15.019266 ignition[1170]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:33:15.019266 ignition[1170]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:33:15.030061 ignition[1170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:33:15.031898 ignition[1170]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:33:15.034028 ignition[1170]: INFO : PUT result: OK Dec 13 14:33:15.038121 ignition[1170]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:33:15.042190 ignition[1170]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:33:15.042190 ignition[1170]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:33:15.047948 ignition[1170]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:33:15.050066 ignition[1170]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:33:15.053900 unknown[1170]: wrote ssh authorized keys file for user: core Dec 13 14:33:15.055890 ignition[1170]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:33:15.059097 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:33:15.061872 ignition[1170]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:33:15.174002 ignition[1170]: INFO : GET result: OK Dec 13 14:33:15.345552 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:33:15.348248 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:33:15.348248 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:33:15.348248 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing 
file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:33:15.348248 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:15.358975 ignition[1170]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3756183432" Dec 13 14:33:15.363632 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1170) Dec 13 14:33:15.363661 ignition[1170]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3756183432": device or resource busy Dec 13 14:33:15.363661 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3756183432", trying btrfs: device or resource busy Dec 13 14:33:15.363661 ignition[1170]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3756183432" Dec 13 14:33:15.363661 ignition[1170]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3756183432" Dec 13 14:33:15.363661 ignition[1170]: INFO : op(3): [started] unmounting "/mnt/oem3756183432" Dec 13 14:33:15.363661 ignition[1170]: INFO : op(3): [finished] unmounting "/mnt/oem3756183432" Dec 13 14:33:15.363661 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:33:15.389987 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:33:15.389987 ignition[1170]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:33:15.421033 systemd-networkd[1014]: eth0: Gained IPv6LL Dec 13 14:33:15.820242 ignition[1170]: INFO : GET result: OK Dec 13 14:33:15.955971 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:33:15.958352 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:33:15.961335 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:33:15.961335 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:33:15.966741 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:33:15.966741 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:33:15.974832 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:33:15.974832 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:16.021073 ignition[1170]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1658122525" Dec 13 14:33:16.021073 ignition[1170]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1658122525": device or resource busy Dec 13 14:33:16.021073 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1658122525", trying btrfs: device or resource busy Dec 13 14:33:16.021073 ignition[1170]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1658122525" Dec 13 14:33:16.021073 ignition[1170]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1658122525" Dec 13 14:33:16.021073 ignition[1170]: INFO : op(6): [started] unmounting "/mnt/oem1658122525" Dec 13 14:33:16.021073 ignition[1170]: INFO : op(6): [finished] unmounting "/mnt/oem1658122525" Dec 13 14:33:16.021073 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:33:16.021073 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:33:16.021073 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:16.015048 systemd[1]: mnt-oem1658122525.mount: Deactivated successfully. Dec 13 14:33:16.045848 ignition[1170]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem339028702" Dec 13 14:33:16.045848 ignition[1170]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem339028702": device or resource busy Dec 13 14:33:16.045848 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem339028702", trying btrfs: device or resource busy Dec 13 14:33:16.045848 ignition[1170]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem339028702" Dec 13 14:33:16.055245 ignition[1170]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem339028702" Dec 13 14:33:16.055245 ignition[1170]: INFO : op(9): [started] unmounting "/mnt/oem339028702" Dec 13 14:33:16.055245 ignition[1170]: INFO : op(9): [finished] unmounting "/mnt/oem339028702" Dec 13 14:33:16.055245 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:33:16.055245 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:33:16.055245 ignition[1170]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 14:33:16.074087 systemd[1]: mnt-oem339028702.mount: Deactivated successfully. 
Dec 13 14:33:16.444330 ignition[1170]: INFO : GET result: OK Dec 13 14:33:16.794370 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:33:16.794370 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:33:16.800646 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:16.809154 ignition[1170]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4037786098" Dec 13 14:33:16.809154 ignition[1170]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4037786098": device or resource busy Dec 13 14:33:16.817834 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4037786098", trying btrfs: device or resource busy Dec 13 14:33:16.817834 ignition[1170]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4037786098" Dec 13 14:33:16.822878 ignition[1170]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4037786098" Dec 13 14:33:16.826326 ignition[1170]: INFO : op(c): [started] unmounting "/mnt/oem4037786098" Dec 13 14:33:16.828870 systemd[1]: mnt-oem4037786098.mount: Deactivated successfully. Dec 13 14:33:16.834012 ignition[1170]: INFO : op(c): [finished] unmounting "/mnt/oem4037786098" Dec 13 14:33:16.835552 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:33:16.835552 ignition[1170]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:33:16.835552 ignition[1170]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:33:16.835552 ignition[1170]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:33:16.835552 ignition[1170]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:33:16.850755 
ignition[1170]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:33:16.850755 ignition[1170]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:33:16.887774 ignition[1170]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:33:16.892333 ignition[1170]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:33:16.894878 ignition[1170]: INFO : files: files passed Dec 13 14:33:16.894878 ignition[1170]: INFO : Ignition finished successfully Dec 13 14:33:16.899077 systemd[1]: Finished ignition-files.service. Dec 13 14:33:16.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.906598 kernel: audit: type=1130 audit(1734100396.900:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.907616 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:33:16.910149 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:33:16.912202 systemd[1]: Starting ignition-quench.service... Dec 13 14:33:16.920282 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:33:16.920866 systemd[1]: Finished ignition-quench.service. Dec 13 14:33:16.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.932900 kernel: audit: type=1130 audit(1734100396.926:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.930973 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:33:16.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.935469 initrd-setup-root-after-ignition[1195]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:33:16.933034 systemd[1]: Reached target ignition-complete.target. Dec 13 14:33:16.934493 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:33:16.963308 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
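
[Annotation] The audit(1734100396.900:37) tags woven through the journal are kernel audit record IDs: seconds since the Unix epoch plus a per-boot serial number. Decoding one confirms it lines up with the 14:33:16 wall-clock prefix on the surrounding entries:

    from datetime import datetime, timezone

    stamp, serial = "1734100396.900:37".split(":")   # ID copied from the record above
    when = datetime.fromtimestamp(float(stamp), tz=timezone.utc)
    print(when.isoformat(), "serial", serial)
    # -> 2024-12-13T14:33:16.900000+00:00 serial 37
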
Dec 13 14:33:16.963452 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:33:16.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.966405 systemd[1]: Reached target initrd-fs.target. Dec 13 14:33:16.969914 systemd[1]: Reached target initrd.target. Dec 13 14:33:16.970150 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:33:16.971356 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:33:16.990492 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:33:16.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:16.994269 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:33:17.008119 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:33:17.009956 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:33:17.012294 systemd[1]: Stopped target timers.target. Dec 13 14:33:17.014595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:33:17.016120 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:33:17.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.018890 systemd[1]: Stopped target initrd.target. Dec 13 14:33:17.021578 systemd[1]: Stopped target basic.target. Dec 13 14:33:17.024395 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:33:17.027428 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:33:17.029420 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:33:17.031716 systemd[1]: Stopped target remote-fs.target. Dec 13 14:33:17.032903 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:33:17.034018 systemd[1]: Stopped target sysinit.target. Dec 13 14:33:17.036404 systemd[1]: Stopped target local-fs.target. Dec 13 14:33:17.038572 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:33:17.041377 systemd[1]: Stopped target swap.target. Dec 13 14:33:17.043936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:33:17.044109 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:33:17.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.047706 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:33:17.050087 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:33:17.051612 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:33:17.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.054160 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:33:17.055577 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Dec 13 14:33:17.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.073821 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:33:17.076102 systemd[1]: Stopped ignition-files.service. Dec 13 14:33:17.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.135384 iscsid[1019]: iscsid shutting down. Dec 13 14:33:17.108794 systemd[1]: Stopping ignition-mount.service... Dec 13 14:33:17.136756 systemd[1]: Stopping iscsid.service... Dec 13 14:33:17.142837 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:33:17.145266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:33:17.146799 ignition[1208]: INFO : Ignition 2.14.0 Dec 13 14:33:17.146799 ignition[1208]: INFO : Stage: umount Dec 13 14:33:17.146799 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:33:17.146799 ignition[1208]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:33:17.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.146886 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:33:17.159342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:33:17.164581 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:33:17.170190 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:33:17.170190 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:33:17.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.172255 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:33:17.177169 ignition[1208]: INFO : PUT result: OK Dec 13 14:33:17.172536 systemd[1]: Stopped iscsid.service. Dec 13 14:33:17.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.181957 systemd[1]: Stopping iscsiuio.service... Dec 13 14:33:17.186760 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:33:17.187256 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:33:17.191068 ignition[1208]: INFO : umount: umount passed Dec 13 14:33:17.191068 ignition[1208]: INFO : Ignition finished successfully Dec 13 14:33:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.195728 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 14:33:17.196097 systemd[1]: Stopped ignition-mount.service. Dec 13 14:33:17.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.199526 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:33:17.200032 systemd[1]: Stopped iscsiuio.service. Dec 13 14:33:17.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.207174 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:33:17.207271 systemd[1]: Stopped ignition-disks.service. Dec 13 14:33:17.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.210477 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:33:17.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.210550 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:33:17.212097 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:33:17.212165 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:33:17.213492 systemd[1]: Stopped target network.target. Dec 13 14:33:17.222765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:33:17.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.222864 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:33:17.229488 systemd[1]: Stopped target paths.target. Dec 13 14:33:17.231631 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:33:17.245769 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:33:17.251837 systemd[1]: Stopped target slices.target. Dec 13 14:33:17.253095 systemd[1]: Stopped target sockets.target. Dec 13 14:33:17.254502 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:33:17.254586 systemd[1]: Closed iscsid.socket. Dec 13 14:33:17.255523 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:33:17.255598 systemd[1]: Closed iscsiuio.socket. Dec 13 14:33:17.256888 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:33:17.256962 systemd[1]: Stopped ignition-setup.service. Dec 13 14:33:17.259195 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:33:17.267058 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:33:17.278758 systemd-networkd[1014]: eth0: DHCPv6 lease lost Dec 13 14:33:17.283734 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 14:33:17.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.291000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:33:17.291000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:33:17.285863 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:33:17.285987 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:33:17.288643 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:33:17.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.288738 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:33:17.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.290281 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:33:17.290351 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:33:17.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.297858 systemd[1]: Stopping network-cleanup.service... Dec 13 14:33:17.300285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:33:17.300544 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:33:17.303870 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:33:17.303943 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:33:17.305766 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:33:17.305830 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:33:17.310391 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:33:17.321957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:33:17.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.333972 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:33:17.339116 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:33:17.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.349586 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:33:17.349702 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:33:17.352648 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:33:17.352709 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:33:17.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:17.355613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:33:17.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.355669 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:33:17.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.358165 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:33:17.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.358368 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:33:17.364341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:33:17.364509 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:33:17.367767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:33:17.367838 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:33:17.370135 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:33:17.370202 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:33:17.385313 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:33:17.399663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:33:17.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.399761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:33:17.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.403308 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:33:17.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.403380 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:33:17.404376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:33:17.404430 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:33:17.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.409537 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Dec 13 14:33:17.410421 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:33:17.410569 systemd[1]: Stopped network-cleanup.service. Dec 13 14:33:17.411144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:33:17.411256 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:33:17.411537 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:33:17.412745 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:33:17.445073 systemd[1]: Switching root. Dec 13 14:33:17.466795 systemd-journald[185]: Journal stopped Dec 13 14:33:23.071860 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 14:33:23.071937 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:33:23.072038 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:33:23.072056 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:33:23.072078 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:33:23.072095 kernel: SELinux: policy capability open_perms=1 Dec 13 14:33:23.072111 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:33:23.072128 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:33:23.072150 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:33:23.072167 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:33:23.072189 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:33:23.083628 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:33:23.083669 systemd[1]: Successfully loaded SELinux policy in 97.432ms. Dec 13 14:33:23.083710 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.530ms. Dec 13 14:33:23.083763 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:33:23.083785 systemd[1]: Detected virtualization amazon. Dec 13 14:33:23.083805 systemd[1]: Detected architecture x86-64. Dec 13 14:33:23.083830 systemd[1]: Detected first boot. Dec 13 14:33:23.083850 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:33:23.083870 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:33:23.083888 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:33:23.083909 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:33:23.083936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:33:23.083959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
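
[Annotation] In the systemd 252 banner above, each +NAME or -NAME token records whether that feature was compiled in, so this build carries SELinux, audit, and seccomp support but no AppArmor or TPM2. A quick way to split the banner (the string is copied from the log; trailing key=value items such as default-hierarchy=unified are simply ignored):

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

    enabled = [f[1:] for f in FEATURES.split() if f.startswith("+")]
    disabled = [f[1:] for f in FEATURES.split() if f.startswith("-")]
    print(len(enabled), "compiled in;", len(disabled), "compiled out")
    print("TPM2 support:", "TPM2" in enabled)   # -> False
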
Dec 13 14:33:23.083983 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 14:33:23.084001 kernel: audit: type=1334 audit(1734100402.659:89): prog-id=12 op=LOAD Dec 13 14:33:23.084024 kernel: audit: type=1334 audit(1734100402.659:90): prog-id=3 op=UNLOAD Dec 13 14:33:23.084043 kernel: audit: type=1334 audit(1734100402.661:91): prog-id=13 op=LOAD Dec 13 14:33:23.084062 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:33:23.084082 kernel: audit: type=1334 audit(1734100402.662:92): prog-id=14 op=LOAD Dec 13 14:33:23.084100 kernel: audit: type=1334 audit(1734100402.662:93): prog-id=4 op=UNLOAD Dec 13 14:33:23.084121 kernel: audit: type=1334 audit(1734100402.662:94): prog-id=5 op=UNLOAD Dec 13 14:33:23.084141 kernel: audit: type=1131 audit(1734100402.664:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.084159 kernel: audit: type=1334 audit(1734100402.673:96): prog-id=12 op=UNLOAD Dec 13 14:33:23.084178 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:33:23.084199 kernel: audit: type=1130 audit(1734100402.677:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.084218 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:33:23.084239 kernel: audit: type=1131 audit(1734100402.677:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.084263 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:33:23.084282 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:33:23.084304 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:33:23.084324 systemd[1]: Created slice system-getty.slice. Dec 13 14:33:23.084345 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:33:23.084365 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:33:23.084386 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:33:23.084406 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:33:23.084429 systemd[1]: Created slice user.slice. Dec 13 14:33:23.084455 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:33:23.084475 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:33:23.084500 systemd[1]: Set up automount boot.automount. Dec 13 14:33:23.084521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:33:23.084541 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:33:23.085071 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:33:23.085103 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:33:23.085125 systemd[1]: Reached target integritysetup.target. Dec 13 14:33:23.085149 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:33:23.085171 systemd[1]: Reached target remote-fs.target. Dec 13 14:33:23.085190 systemd[1]: Reached target slices.target. Dec 13 14:33:23.085207 systemd[1]: Reached target swap.target. Dec 13 14:33:23.085224 systemd[1]: Reached target torcx.target. Dec 13 14:33:23.085242 systemd[1]: Reached target veritysetup.target. 
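
[Annotation] Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice look mangled but are systemd's unit-name escaping: '/' maps to '-', so a literal '-' inside a name must be encoded as \x2d. A simplified re-implementation of the systemd-escape(1) rule (approximate; the real tool also special-cases things like a leading '.'):

    def systemd_escape(s: str) -> str:
        # '/' becomes '-'; anything outside [A-Za-z0-9:_.] becomes \xXX,
        # which is why '-' shows up as \x2d in the slice names above.
        out = []
        for ch in s:
            if ch == "/":
                out.append("-")
            elif ch.isascii() and (ch.isalnum() or ch in ":_."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape("coreos-metadata-sshkeys"))  # coreos\x2dmetadata\x2dsshkeys
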
Dec 13 14:33:23.085259 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:33:23.085277 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:33:23.085294 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:33:23.085312 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:33:23.085332 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:33:23.085349 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:33:23.085367 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:33:23.085384 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:33:23.085401 systemd[1]: Mounting media.mount... Dec 13 14:33:23.085421 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:23.085439 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:33:23.085457 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:33:23.085475 systemd[1]: Mounting tmp.mount... Dec 13 14:33:23.085494 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:33:23.085514 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:23.085531 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:33:23.088623 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:33:23.088664 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:23.088683 systemd[1]: Starting modprobe@drm.service... Dec 13 14:33:23.088702 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:23.088721 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:33:23.088738 systemd[1]: Starting modprobe@loop.service... Dec 13 14:33:23.088757 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:33:23.088776 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:33:23.088794 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:33:23.088813 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:33:23.088834 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:33:23.088851 systemd[1]: Stopped systemd-journald.service. Dec 13 14:33:23.088869 systemd[1]: Starting systemd-journald.service... Dec 13 14:33:23.088886 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:33:23.088904 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:33:23.088922 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:33:23.088940 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:33:23.088958 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:33:23.089069 systemd[1]: Stopped verity-setup.service. Dec 13 14:33:23.089093 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:23.089114 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:33:23.089133 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:33:23.089150 systemd[1]: Mounted media.mount. Dec 13 14:33:23.089169 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:33:23.089187 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:33:23.089205 systemd[1]: Mounted tmp.mount. Dec 13 14:33:23.089224 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:33:23.089240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:23.089256 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:23.089278 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 14:33:23.089297 systemd[1]: Finished modprobe@drm.service. Dec 13 14:33:23.089313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:23.089329 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:23.089347 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:33:23.089366 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:33:23.089426 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:33:23.089447 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:33:23.089552 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:33:23.123723 systemd[1]: Reached target network-pre.target. Dec 13 14:33:23.123751 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:33:23.123771 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:33:23.123791 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:33:23.123815 kernel: loop: module loaded Dec 13 14:33:23.123835 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:23.123857 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:33:23.123881 systemd-journald[1322]: Journal started Dec 13 14:33:23.123964 systemd-journald[1322]: Runtime Journal (/run/log/journal/ec21875dd8b32e7192076941f0bfa4aa) is 4.8M, max 38.7M, 33.9M free. Dec 13 14:33:23.131365 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:33:17.905000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:33:18.040000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:33:18.040000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:33:18.040000 audit: BPF prog-id=10 op=LOAD Dec 13 14:33:18.040000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:33:18.040000 audit: BPF prog-id=11 op=LOAD Dec 13 14:33:18.040000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:33:18.205000 audit[1242]: AVC avc: denied { associate } for pid=1242 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:33:18.205000 audit[1242]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1225 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:18.205000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:33:18.208000 audit[1242]: AVC avc: denied { associate } for pid=1242 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:33:18.208000 audit[1242]: SYSCALL arch=c000003e syscall=258 success=yes 
exit=0 a0=ffffffffffffff9c a1=c0001179b9 a2=1ed a3=0 items=2 ppid=1225 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:18.208000 audit: CWD cwd="/" Dec 13 14:33:18.208000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:18.208000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:18.208000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:33:22.659000 audit: BPF prog-id=12 op=LOAD Dec 13 14:33:22.659000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:33:22.661000 audit: BPF prog-id=13 op=LOAD Dec 13 14:33:22.662000 audit: BPF prog-id=14 op=LOAD Dec 13 14:33:22.662000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:33:22.662000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:33:22.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.673000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:33:22.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.928000 audit: BPF prog-id=15 op=LOAD Dec 13 14:33:22.928000 audit: BPF prog-id=16 op=LOAD Dec 13 14:33:22.928000 audit: BPF prog-id=17 op=LOAD Dec 13 14:33:22.928000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:33:23.141947 systemd[1]: Started systemd-journald.service. 
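
[Annotation] The long PROCTITLE hex blobs in the audit records are the audited process's argv, NUL-separated and hex-encoded. Decoding the torcx-generator one recovers the generator invocation; the result is exactly 128 bytes, the audit subsystem's PROCTITLE cap, which is why the last argument is cut off mid-path in the log itself:

    # PROCTITLE field copied verbatim from the torcx-generator records above.
    HEX = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
           "2F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E65"
           "7261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79"
           "002F72756E2F73797374656D642F67656E657261746F722E6C61")

    raw = bytes.fromhex(HEX)
    print(len(raw), "bytes")                          # -> 128 bytes
    print([arg.decode() for arg in raw.split(b"\x00")])
    # -> ['/usr/lib/systemd/system-generators/torcx-generator',
    #     '/run/systemd/generator', '/run/systemd/generator.early',
    #     '/run/systemd/generator.la']   (last path truncated by the 128-byte cap)
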
Dec 13 14:33:22.928000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:33:23.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:23.069000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:33:23.069000 audit[1322]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd71896680 a2=4000 a3=7ffd7189671c items=0 ppid=1 pid=1322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:23.069000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:33:23.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.159815 systemd-journald[1322]: Time spent on flushing to /var/log/journal/ec21875dd8b32e7192076941f0bfa4aa is 154.896ms for 1157 entries. Dec 13 14:33:23.159815 systemd-journald[1322]: System Journal (/var/log/journal/ec21875dd8b32e7192076941f0bfa4aa) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:33:23.371821 systemd-journald[1322]: Received client request to flush runtime journal. Dec 13 14:33:23.371963 kernel: fuse: init (API version 7.34) Dec 13 14:33:23.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:22.657864 systemd[1]: Queued start job for default target multi-user.target. 
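The journald size reports above (runtime journal 4.8M against a 38.7M cap, system journal 8.0M against a 195.6M cap, plus the flush to /var/log/journal) are governed by journald.conf. The option names below are standard journald settings; the values are illustrative only, not what this host necessarily uses:

    # Illustrative /etc/systemd/journald.conf overrides
    [Journal]
    Storage=persistent    # flush /run/log/journal to /var/log/journal
    RuntimeMaxUse=38M     # cap for the volatile runtime journal
    SystemMaxUse=195M     # cap for the persistent system journal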
Dec 13 14:33:18.202860 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:33:22.664467 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:33:18.203602 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:33:23.141438 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:33:18.203630 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:33:23.141766 systemd[1]: Finished modprobe@loop.service. Dec 13 14:33:18.203676 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:33:23.143461 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:33:18.203693 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:33:23.147110 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:33:18.203741 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:33:23.148305 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:33:18.203763 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:33:23.177877 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:33:23.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:18.204038 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:33:23.187509 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:33:18.204094 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:33:23.202115 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:33:18.204114 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:33:23.203054 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:33:18.204803 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:33:23.212266 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
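The torcx-generator records interleaved above show profiles being resolved from JSON manifests under /usr/share/torcx/profiles. A hedged sketch of that manifest shape, assuming the profile-manifest-v0 schema; the image reference shown is the one this log later unpacks:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }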
Dec 13 14:33:18.204861 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:33:23.223793 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:33:18.204891 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:33:23.248097 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:33:18.204915 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:33:23.340721 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:33:18.204940 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:33:23.343433 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:33:18.204962 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:33:23.356142 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:33:21.853810 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:33:23.358951 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:33:21.854074 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:33:23.374180 systemd[1]: Finished systemd-journal-flush.service. 
Dec 13 14:33:21.854179 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:33:21.854463 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:33:21.854514 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:33:21.854598 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-12-13T14:33:21Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:33:23.388165 udevadm[1358]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:33:23.498524 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:33:23.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:23.501215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:33:23.584233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:33:23.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.278237 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:33:24.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.279000 audit: BPF prog-id=18 op=LOAD Dec 13 14:33:24.279000 audit: BPF prog-id=19 op=LOAD Dec 13 14:33:24.279000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:33:24.279000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:33:24.281194 systemd[1]: Starting systemd-udevd.service... Dec 13 14:33:24.307672 systemd-udevd[1361]: Using default interface naming scheme 'v252'. Dec 13 14:33:24.352267 systemd[1]: Started systemd-udevd.service. Dec 13 14:33:24.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.355000 audit: BPF prog-id=20 op=LOAD Dec 13 14:33:24.356936 systemd[1]: Starting systemd-networkd.service... Dec 13 14:33:24.367000 audit: BPF prog-id=21 op=LOAD Dec 13 14:33:24.367000 audit: BPF prog-id=22 op=LOAD Dec 13 14:33:24.367000 audit: BPF prog-id=23 op=LOAD Dec 13 14:33:24.369383 systemd[1]: Starting systemd-userdbd.service... 
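The "system state sealed" record above shows torcx writing its final state to /run/metadata/torcx as an environment file; reproduced here in that format for readability:

    # /run/metadata/torcx, as sealed above
    TORCX_LOWER_PROFILES="vendor"
    TORCX_UPPER_PROFILE=""
    TORCX_PROFILE_PATH="/run/torcx/profile.json"
    TORCX_BINDIR="/run/torcx/bin"
    TORCX_UNPACKDIR="/run/torcx/unpack"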
Dec 13 14:33:24.455215 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:33:24.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.482290 systemd[1]: Started systemd-userdbd.service. Dec 13 14:33:24.485055 (udev-worker)[1373]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:33:24.628922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 14:33:24.622000 audit[1367]: AVC avc: denied { confidentiality } for pid=1367 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:33:24.622000 audit[1367]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5559fcd385f0 a1=337fc a2=7fce2e09fbc5 a3=5 items=110 ppid=1361 pid=1367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:24.622000 audit: CWD cwd="/" Dec 13 14:33:24.622000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=1 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.653940 systemd-networkd[1370]: lo: Link UP Dec 13 14:33:24.653950 systemd-networkd[1370]: lo: Gained carrier Dec 13 14:33:24.654519 systemd-networkd[1370]: Enumeration completed Dec 13 14:33:24.654658 systemd[1]: Started systemd-networkd.service. Dec 13 14:33:24.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.654863 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:33:24.622000 audit: PATH item=2 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=3 name=(null) inode=14169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.658526 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:33:24.622000 audit: PATH item=4 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=5 name=(null) inode=14170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=6 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=7 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=8 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=9 name=(null) inode=14172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=10 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=11 name=(null) inode=14173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=12 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=13 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=14 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=15 name=(null) inode=14175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=16 name=(null) inode=14171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=17 name=(null) inode=14176 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=18 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=19 name=(null) inode=14177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=20 name=(null) inode=14177 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=21 name=(null) inode=14178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=22 name=(null) inode=14177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=23 name=(null) inode=14179 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=24 name=(null) inode=14177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=25 name=(null) inode=14180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=26 name=(null) inode=14177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=27 name=(null) inode=14181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=28 name=(null) inode=14177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=29 name=(null) inode=14182 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=30 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=31 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=32 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=33 name=(null) inode=14184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=34 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=35 name=(null) inode=14185 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=36 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=37 name=(null) inode=14186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=38 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=39 name=(null) inode=14187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=40 name=(null) inode=14183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=41 name=(null) inode=14188 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=42 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=43 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=44 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=45 name=(null) inode=14190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=46 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=47 name=(null) inode=14191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=48 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=49 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=50 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=51 name=(null) inode=14193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=52 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:33:24.622000 audit: PATH item=53 name=(null) inode=14194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=55 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=56 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=57 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=58 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=59 name=(null) inode=14197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=60 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=61 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=62 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=63 name=(null) inode=14199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=64 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=65 name=(null) inode=14200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=66 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=67 name=(null) inode=14201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=68 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=69 name=(null) inode=14202 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=70 name=(null) inode=14198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=71 name=(null) inode=14203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=72 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=73 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=74 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=75 name=(null) inode=14205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=76 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=77 name=(null) inode=14206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=78 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=79 name=(null) inode=14207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=80 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=81 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=82 name=(null) inode=14204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=83 name=(null) inode=14209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=84 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=85 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=86 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=87 name=(null) inode=14211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=88 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=89 name=(null) inode=14212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=90 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.669431 systemd-networkd[1370]: eth0: Link UP Dec 13 14:33:24.669588 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:33:24.622000 audit: PATH item=91 name=(null) inode=14213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.669660 systemd-networkd[1370]: eth0: Gained carrier Dec 13 14:33:24.622000 audit: PATH item=92 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=93 name=(null) inode=14214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=94 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=95 name=(null) inode=14215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=96 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=97 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=98 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=99 name=(null) inode=14217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=100 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:33:24.622000 audit: PATH item=101 name=(null) inode=14218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=102 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=103 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=104 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=105 name=(null) inode=14220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=106 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=107 name=(null) inode=14221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PATH item=109 name=(null) inode=14222 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:24.622000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:33:24.681786 systemd-networkd[1370]: eth0: DHCPv4 address 172.31.18.151/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:33:24.687632 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:33:24.697626 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 14:33:24.700601 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 14:33:24.710594 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 14:33:24.719583 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:33:24.740585 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:33:24.783660 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1376) Dec 13 14:33:24.918139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:33:24.983081 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:33:24.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:24.985714 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:33:25.059094 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:33:25.088849 systemd[1]: Finished lvm2-activation-early.service. 
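Per the records above, eth0 was configured from /usr/lib/systemd/network/zz-default.network and acquired 172.31.18.151/20 over DHCPv4. A catch-all network file of that kind is typically just a wildcard match plus DHCP; a minimal sketch, assuming Flatcar's stock fallback behaviour (the shipped file may carry extra options):

    # zz-default.network-style catch-all (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes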
Dec 13 14:33:25.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.090214 systemd[1]: Reached target cryptsetup.target. Dec 13 14:33:25.093272 systemd[1]: Starting lvm2-activation.service... Dec 13 14:33:25.101956 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:33:25.135275 systemd[1]: Finished lvm2-activation.service. Dec 13 14:33:25.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.136869 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:33:25.137789 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:33:25.137827 systemd[1]: Reached target local-fs.target. Dec 13 14:33:25.139033 systemd[1]: Reached target machines.target. Dec 13 14:33:25.142209 systemd[1]: Starting ldconfig.service... Dec 13 14:33:25.144475 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:25.144617 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:25.146467 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:33:25.149255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:33:25.153487 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:33:25.159385 systemd[1]: Starting systemd-sysext.service... Dec 13 14:33:25.165511 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1478 (bootctl) Dec 13 14:33:25.168242 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:33:25.207479 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:33:25.220914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:33:25.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.222829 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:33:25.223414 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:33:25.244595 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:33:25.403793 systemd-fsck[1488]: fsck.fat 4.2 (2021-01-31) Dec 13 14:33:25.403793 systemd-fsck[1488]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 14:33:25.409135 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:33:25.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.413167 systemd[1]: Mounting boot.mount... Dec 13 14:33:25.437080 systemd[1]: Mounted boot.mount. Dec 13 14:33:25.476692 systemd[1]: Finished systemd-boot-update.service. 
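The \x2d runs in unit names such as systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service are systemd's path escaping: "/" separators become "-", and a literal "-" inside a path component becomes \x2d (the mapping systemd-escape --path produces). Applied to the devices fsck'd above:

    /dev/disk/by-label/OEM         ->  dev-disk-by\x2dlabel-OEM.device
    /dev/disk/by-label/EFI-SYSTEM  ->  systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service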
Dec 13 14:33:25.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.531577 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:33:25.567590 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:33:25.586856 (sd-sysext)[1504]: Using extensions 'kubernetes'. Dec 13 14:33:25.587385 (sd-sysext)[1504]: Merged extensions into '/usr'. Dec 13 14:33:25.615250 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:25.617446 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:33:25.619066 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:25.626424 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:25.632582 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:25.635865 systemd[1]: Starting modprobe@loop.service... Dec 13 14:33:25.637185 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:25.637373 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:25.637542 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:25.643349 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:33:25.645446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:25.645739 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:25.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.647739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:25.647906 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:25.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.649721 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:33:25.649878 systemd[1]: Finished modprobe@loop.service. Dec 13 14:33:25.651574 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:25.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:25.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.651723 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:33:25.653835 systemd[1]: Finished systemd-sysext.service. Dec 13 14:33:25.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:25.660441 systemd[1]: Starting ensure-sysext.service... Dec 13 14:33:25.663869 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:33:25.677645 systemd[1]: Reloading. Dec 13 14:33:25.716062 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:33:25.722704 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:33:25.732208 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:33:25.882224 /usr/lib/systemd/system-generators/torcx-generator[1531]: time="2024-12-13T14:33:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:33:25.882311 /usr/lib/systemd/system-generators/torcx-generator[1531]: time="2024-12-13T14:33:25Z" level=info msg="torcx already run" Dec 13 14:33:25.916664 systemd-networkd[1370]: eth0: Gained IPv6LL Dec 13 14:33:26.102615 ldconfig[1477]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:33:26.105084 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:33:26.105107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:33:26.127234 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
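The locksmithd.service warnings above flag cgroup-v1 directives that systemd will eventually drop; the modern equivalents are CPUWeight= and MemoryMax=. A hypothetical corrected [Service] fragment, with placeholder values rather than whatever locksmithd actually sets:

    [Service]
    # cgroup-v2 replacements for the deprecated directives:
    CPUWeight=100      # instead of CPUShares=
    MemoryMax=128M     # instead of MemoryLimit=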
Dec 13 14:33:26.210000 audit: BPF prog-id=24 op=LOAD Dec 13 14:33:26.210000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:33:26.210000 audit: BPF prog-id=25 op=LOAD Dec 13 14:33:26.210000 audit: BPF prog-id=26 op=LOAD Dec 13 14:33:26.210000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:33:26.210000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:33:26.213000 audit: BPF prog-id=27 op=LOAD Dec 13 14:33:26.213000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:33:26.215000 audit: BPF prog-id=28 op=LOAD Dec 13 14:33:26.215000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:33:26.215000 audit: BPF prog-id=29 op=LOAD Dec 13 14:33:26.215000 audit: BPF prog-id=30 op=LOAD Dec 13 14:33:26.215000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:33:26.215000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:33:26.217000 audit: BPF prog-id=31 op=LOAD Dec 13 14:33:26.217000 audit: BPF prog-id=32 op=LOAD Dec 13 14:33:26.217000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:33:26.217000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:33:26.221345 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:33:26.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.223007 systemd[1]: Finished ldconfig.service. Dec 13 14:33:26.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.225020 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:33:26.236970 systemd[1]: Starting audit-rules.service... Dec 13 14:33:26.240930 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:33:26.247000 audit: BPF prog-id=33 op=LOAD Dec 13 14:33:26.257000 audit: BPF prog-id=34 op=LOAD Dec 13 14:33:26.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.244584 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:33:26.249506 systemd[1]: Starting systemd-resolved.service... Dec 13 14:33:26.265287 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:33:26.278048 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:33:26.281004 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:33:26.287962 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:26.295770 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:26.296178 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.299282 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:26.302738 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:26.307780 systemd[1]: Starting modprobe@loop.service... 
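audit-rules.service, starting above, loads persistent rules written in standard auditctl syntax; the BPF prog-id LOAD/UNLOAD and SERVICE_START records throughout this log are kernel audit events in the same stream. A hypothetical rules fragment, with watches chosen purely for illustration:

    # /etc/audit/rules.d/example.rules (hypothetical)
    -w /etc/systemd/ -p wa -k unit-files
    -a always,exit -F arch=b64 -S bpf -k bpf-syscall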
Dec 13 14:33:26.309323 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.309630 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:26.309997 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:26.310171 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:26.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.319000 audit[1590]: SYSTEM_BOOT pid=1590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.312664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:26.313013 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:26.315390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:26.315608 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:26.317032 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:26.325425 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:26.326006 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.330202 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:26.335630 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:26.337629 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.338494 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:26.339409 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:26.339757 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:33:26.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.344944 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:33:26.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.345283 systemd[1]: Finished modprobe@loop.service. Dec 13 14:33:26.348411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:26.348795 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:26.351326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:26.351837 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:26.354716 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:26.355219 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.361050 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:33:26.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.370955 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:26.371996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.374839 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:26.379306 systemd[1]: Starting modprobe@drm.service... Dec 13 14:33:26.382446 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:26.389703 systemd[1]: Starting modprobe@loop.service... Dec 13 14:33:26.391067 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.391305 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:26.391573 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
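The paired SERVICE_START/SERVICE_STOP records for modprobe@dm_mod, modprobe@efi_pstore and modprobe@loop come from systemd's modprobe@.service template: a oneshot unit that loads the kernel module named by its instance and exits immediately, hence each "Deactivated successfully" right after the start. An abridged sketch of the upstream template (recalled from the systemd sources; check /usr/lib/systemd/system/modprobe@.service on the host for the exact text):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # the leading "-" tolerates a non-zero exit; %I expands to the instance, e.g. dm_mod
    ExecStart=-/sbin/modprobe -abq %I

Usage: systemctl start modprobe@dm_mod.service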
Dec 13 14:33:26.391732 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:26.393578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:26.393891 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:26.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.399065 systemd[1]: Finished ensure-sysext.service. Dec 13 14:33:26.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.403758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:26.403992 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:26.406549 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:33:26.406796 systemd[1]: Finished modprobe@loop.service. Dec 13 14:33:26.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.408050 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:26.408103 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.418372 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:33:26.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.418579 systemd[1]: Finished modprobe@drm.service. Dec 13 14:33:26.449872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:33:26.450687 systemd[1]: Finished systemd-machine-id-commit.service. 
Dec 13 14:33:26.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.452776 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:33:26.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.456510 systemd[1]: Starting systemd-update-done.service... Dec 13 14:33:26.468680 systemd[1]: Finished systemd-update-done.service. Dec 13 14:33:26.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.517487 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:33:26.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.521878 systemd[1]: Reached target time-set.target. Dec 13 14:33:26.522000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:33:26.522000 audit[1616]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffffa0d9420 a2=420 a3=0 items=0 ppid=1584 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:26.522000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:33:26.523377 augenrules[1616]: No rules Dec 13 14:33:26.523915 systemd[1]: Finished audit-rules.service. Dec 13 14:33:26.535899 systemd-resolved[1588]: Positive Trust Anchors: Dec 13 14:33:26.535917 systemd-resolved[1588]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:33:26.535959 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:33:26.570501 systemd-resolved[1588]: Defaulting to hostname 'linux'. Dec 13 14:33:26.572482 systemd[1]: Started systemd-resolved.service. Dec 13 14:33:26.573527 systemd[1]: Reached target network.target. Dec 13 14:33:26.574394 systemd[1]: Reached target network-online.target. Dec 13 14:33:26.575362 systemd[1]: Reached target nss-lookup.target. Dec 13 14:33:26.576222 systemd[1]: Reached target sysinit.target. Dec 13 14:33:26.580543 systemd[1]: Started motdgen.path. Dec 13 14:33:26.581490 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:33:26.583570 systemd[1]: Started logrotate.timer. Dec 13 14:33:26.584588 systemd[1]: Started mdadm.timer. 
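The CONFIG_CHANGE/SYSCALL/PROCTITLE triple above records audit-rules.service loading the rule file; the PROCTITLE field is the command line hex-encoded with NUL-separated arguments, and augenrules then reports that no rules are configured. Decoding and reproducing the steps with the standard audit userspace tools:

    # decode the PROCTITLE hex shown above; NUL bytes separate the arguments
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules

    augenrules --load   # regenerate /etc/audit/audit.rules from /etc/audit/rules.d/ and load it
    auditctl -l         # list active rules; on this host it would print "No rules"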
Dec 13 14:33:26.585594 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:33:26.586797 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:33:26.586821 systemd[1]: Reached target paths.target. Dec 13 14:33:26.587877 systemd[1]: Reached target timers.target. Dec 13 14:33:26.589099 systemd[1]: Listening on dbus.socket. Dec 13 14:33:26.591070 systemd[1]: Starting docker.socket... Dec 13 14:33:26.594916 systemd[1]: Listening on sshd.socket. Dec 13 14:33:26.596314 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:26.597111 systemd[1]: Listening on docker.socket. Dec 13 14:33:26.598267 systemd[1]: Reached target sockets.target. Dec 13 14:33:26.599472 systemd[1]: Reached target basic.target. Dec 13 14:33:26.600483 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.600507 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:33:26.601889 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:33:26.605083 systemd[1]: Starting containerd.service... Dec 13 14:33:26.616440 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:33:27.931572 systemd-timesyncd[1589]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org). Dec 13 14:33:27.931633 systemd-timesyncd[1589]: Initial clock synchronization to Fri 2024-12-13 14:33:27.931457 UTC. Dec 13 14:33:27.932016 systemd-resolved[1588]: Clock change detected. Flushing caches. Dec 13 14:33:27.932400 systemd[1]: Starting dbus.service... Dec 13 14:33:27.935821 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:33:27.939366 systemd[1]: Starting extend-filesystems.service... Dec 13 14:33:27.946508 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:33:27.949262 systemd[1]: Starting kubelet.service... Dec 13 14:33:27.953678 systemd[1]: Starting motdgen.service... Dec 13 14:33:27.957584 systemd[1]: Started nvidia.service. Dec 13 14:33:27.961303 systemd[1]: Starting prepare-helm.service... Dec 13 14:33:27.964147 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:33:27.969082 systemd[1]: Starting sshd-keygen.service... Dec 13 14:33:27.976978 systemd[1]: Starting systemd-logind.service... Dec 13 14:33:27.979542 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:27.979846 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:33:27.981078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:33:27.982295 systemd[1]: Starting update-engine.service... Dec 13 14:33:27.996683 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:33:28.091349 jq[1628]: false Dec 13 14:33:28.092127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:33:28.093228 jq[1638]: true Dec 13 14:33:28.092403 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
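Note the timestamp jump from 14:33:26 to 14:33:27.93 mid-stream: systemd-timesyncd reached 0.flatcar.pool.ntp.org (205.233.73.201) and stepped the clock, and systemd-resolved flushed its DNS caches in response. On Flatcar that pool is a compiled-in default; setting it explicitly and inspecting the result would look like this (file path and option names are standard systemd-timesyncd):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org

    timedatectl timesync-status   # current server, poll interval, measured offset
    resolvectl flush-caches       # manual equivalent of the cache flush logged above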
Dec 13 14:33:28.175987 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:33:28.176237 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:33:28.212550 tar[1641]: linux-amd64/helm Dec 13 14:33:28.227661 jq[1650]: true Dec 13 14:33:28.261961 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:33:28.263017 systemd[1]: Finished motdgen.service. Dec 13 14:33:28.288940 dbus-daemon[1627]: [system] SELinux support is enabled Dec 13 14:33:28.289154 systemd[1]: Started dbus.service. Dec 13 14:33:28.299464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:33:28.299547 systemd[1]: Reached target system-config.target. Dec 13 14:33:28.301512 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:33:28.301541 systemd[1]: Reached target user-config.target. Dec 13 14:33:28.325365 dbus-daemon[1627]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:33:28.335675 extend-filesystems[1629]: Found loop1 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p1 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p2 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p3 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found usr Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p4 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p6 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p7 Dec 13 14:33:28.337600 extend-filesystems[1629]: Found nvme0n1p9 Dec 13 14:33:28.337600 extend-filesystems[1629]: Checking size of /dev/nvme0n1p9 Dec 13 14:33:28.351641 amazon-ssm-agent[1624]: 2024/12/13 14:33:28 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:33:28.351641 amazon-ssm-agent[1624]: Initializing new seelog logger Dec 13 14:33:28.351641 amazon-ssm-agent[1624]: New Seelog Logger Creation Complete Dec 13 14:33:28.351641 amazon-ssm-agent[1624]: 2024/12/13 14:33:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:33:28.351641 amazon-ssm-agent[1624]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:33:28.342625 dbus-daemon[1627]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:33:28.353353 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:33:28.369261 amazon-ssm-agent[1624]: 2024/12/13 14:33:28 processing appconfig overrides Dec 13 14:33:28.434575 systemd[1]: Created slice system-sshd.slice. Dec 13 14:33:28.444015 update_engine[1637]: I1213 14:33:28.443295 1637 main.cc:92] Flatcar Update Engine starting Dec 13 14:33:28.454354 systemd[1]: Started update-engine.service. Dec 13 14:33:28.454624 update_engine[1637]: I1213 14:33:28.454598 1637 update_check_scheduler.cc:74] Next update check in 8m57s Dec 13 14:33:28.457923 systemd[1]: Started locksmithd.service. 
Dec 13 14:33:28.484673 extend-filesystems[1629]: Resized partition /dev/nvme0n1p9 Dec 13 14:33:28.497691 extend-filesystems[1698]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:33:28.512401 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:33:28.604259 bash[1691]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:33:28.610403 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:33:28.644860 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:33:28.682444 env[1647]: time="2024-12-13T14:33:28.682095458Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:33:28.684580 systemd-logind[1636]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:33:28.684613 systemd-logind[1636]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 14:33:28.684667 systemd-logind[1636]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:33:28.687231 extend-filesystems[1698]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:33:28.687231 extend-filesystems[1698]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:33:28.687231 extend-filesystems[1698]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:33:28.696756 extend-filesystems[1629]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:33:28.689713 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:33:28.690598 systemd[1]: Finished extend-filesystems.service. Dec 13 14:33:28.696779 systemd-logind[1636]: New seat seat0. Dec 13 14:33:28.717067 systemd[1]: Started systemd-logind.service. Dec 13 14:33:28.874484 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:33:28.896124 env[1647]: time="2024-12-13T14:33:28.895778043Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:33:28.896124 env[1647]: time="2024-12-13T14:33:28.895993942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913337754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913422462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913731729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913757280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913776852Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913794208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914221 env[1647]: time="2024-12-13T14:33:28.913902201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914626 env[1647]: time="2024-12-13T14:33:28.914276850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914626 env[1647]: time="2024-12-13T14:33:28.914506008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:33:28.914626 env[1647]: time="2024-12-13T14:33:28.914533177Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:33:28.914626 env[1647]: time="2024-12-13T14:33:28.914605967Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:33:28.914626 env[1647]: time="2024-12-13T14:33:28.914623189Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:33:28.926186 dbus-daemon[1627]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:33:28.926392 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:33:28.928460 dbus-daemon[1627]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1681 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:33:28.932525 systemd[1]: Starting polkit.service... Dec 13 14:33:28.947989 env[1647]: time="2024-12-13T14:33:28.947872578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:33:28.947989 env[1647]: time="2024-12-13T14:33:28.947950090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:33:28.947989 env[1647]: time="2024-12-13T14:33:28.947972824Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:33:28.948211 env[1647]: time="2024-12-13T14:33:28.948041355Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948211 env[1647]: time="2024-12-13T14:33:28.948138346Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948211 env[1647]: time="2024-12-13T14:33:28.948177741Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948211 env[1647]: time="2024-12-13T14:33:28.948201056Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948359 env[1647]: time="2024-12-13T14:33:28.948221316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948359 env[1647]: time="2024-12-13T14:33:28.948260632Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948359 env[1647]: time="2024-12-13T14:33:28.948280533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 14:33:28.948359 env[1647]: time="2024-12-13T14:33:28.948299019Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.948359 env[1647]: time="2024-12-13T14:33:28.948333391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:33:28.948567 env[1647]: time="2024-12-13T14:33:28.948514840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:33:28.948678 env[1647]: time="2024-12-13T14:33:28.948658056Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:33:28.949263 env[1647]: time="2024-12-13T14:33:28.949218003Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:33:28.949353 env[1647]: time="2024-12-13T14:33:28.949293792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949353 env[1647]: time="2024-12-13T14:33:28.949317999Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:33:28.949532 env[1647]: time="2024-12-13T14:33:28.949510409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949583 env[1647]: time="2024-12-13T14:33:28.949541898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949626 env[1647]: time="2024-12-13T14:33:28.949578147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949626 env[1647]: time="2024-12-13T14:33:28.949597261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949626 env[1647]: time="2024-12-13T14:33:28.949617003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949747 env[1647]: time="2024-12-13T14:33:28.949656575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949747 env[1647]: time="2024-12-13T14:33:28.949675819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949747 env[1647]: time="2024-12-13T14:33:28.949693532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.949747 env[1647]: time="2024-12-13T14:33:28.949714044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:33:28.949959 env[1647]: time="2024-12-13T14:33:28.949923112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.950006 env[1647]: time="2024-12-13T14:33:28.949947173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.950006 env[1647]: time="2024-12-13T14:33:28.949983496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.950082 env[1647]: time="2024-12-13T14:33:28.950002554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 14:33:28.950082 env[1647]: time="2024-12-13T14:33:28.950027025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:33:28.950082 env[1647]: time="2024-12-13T14:33:28.950060536Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:33:28.950196 env[1647]: time="2024-12-13T14:33:28.950090877Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:33:28.950196 env[1647]: time="2024-12-13T14:33:28.950150260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:33:28.950609 env[1647]: time="2024-12-13T14:33:28.950512466Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.950635589Z" level=info msg="Connect containerd service" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.950682675Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952336101Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:33:28.953096 
env[1647]: time="2024-12-13T14:33:28.952476494Z" level=info msg="Start subscribing containerd event" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952547238Z" level=info msg="Start recovering state" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952837046Z" level=info msg="Start event monitor" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952867037Z" level=info msg="Start snapshots syncer" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952880655Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:33:28.953096 env[1647]: time="2024-12-13T14:33:28.952906672Z" level=info msg="Start streaming server" Dec 13 14:33:28.953559 env[1647]: time="2024-12-13T14:33:28.953536269Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:33:28.953683 env[1647]: time="2024-12-13T14:33:28.953649156Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:33:28.953844 systemd[1]: Started containerd.service. Dec 13 14:33:28.970156 polkitd[1726]: Started polkitd version 121 Dec 13 14:33:28.993772 env[1647]: time="2024-12-13T14:33:28.993723659Z" level=info msg="containerd successfully booted in 0.421609s" Dec 13 14:33:29.011791 polkitd[1726]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:33:29.015182 polkitd[1726]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:33:29.028442 polkitd[1726]: Finished loading, compiling and executing 2 rules Dec 13 14:33:29.029255 dbus-daemon[1627]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:33:29.029481 systemd[1]: Started polkit.service. Dec 13 14:33:29.030293 polkitd[1726]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:33:29.064462 systemd-hostnamed[1681]: Hostname set to (transient) Dec 13 14:33:29.064590 systemd-resolved[1588]: System hostname changed to 'ip-172-31-18-151'. Dec 13 14:33:29.191456 coreos-metadata[1626]: Dec 13 14:33:29.191 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:33:29.196881 coreos-metadata[1626]: Dec 13 14:33:29.196 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:33:29.197058 coreos-metadata[1626]: Dec 13 14:33:29.196 INFO Fetch successful Dec 13 14:33:29.197173 coreos-metadata[1626]: Dec 13 14:33:29.197 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:33:29.197281 coreos-metadata[1626]: Dec 13 14:33:29.197 INFO Fetch successful Dec 13 14:33:29.200590 unknown[1626]: wrote ssh authorized keys file for user: core Dec 13 14:33:29.242802 update-ssh-keys[1777]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:33:29.244478 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
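The coreos-metadata fetches above talk to the EC2 instance metadata service: a PUT to the token endpoint (IMDSv2), then GETs under /2019-10-01/meta-data/public-keys. Reproduced by hand with curl (endpoints exactly as logged; the TTL value is a typical choice, not from this host):

    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
              -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key"

The fetched key is what was written to /home/core/.ssh/authorized_keys for the core user.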
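The long "Start cri plugin with config {...}" dump earlier in this boot is containerd 1.6 echoing its effective CRI configuration: runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, overlayfs snapshotter, and CNI under /opt/cni/bin and /etc/cni/net.d (the "failed to load cni during init" error is expected until a CNI plugin drops a config there). A config.toml fragment that would produce those values (containerd 1.6 TOML schema; values copied from the dump):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"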
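Further up, the extend-filesystems entries and the EXT4-fs kernel lines show the root filesystem on /dev/nvme0n1p9 being grown on-line from 553472 to 1489915 4k blocks to fill the enlarged partition. ext4 supports growing while mounted, so the manual equivalent is simply (device name as logged):

    resize2fs /dev/nvme0n1p9   # on-line grow to fill the partition; prints old and new block counts
    df -h /                    # confirm the new size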
Dec 13 14:33:29.277842 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Create new startup processor Dec 13 14:33:29.281607 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:33:29.282248 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing bookkeeping folders Dec 13 14:33:29.282403 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO removing the completed state files Dec 13 14:33:29.283834 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:33:29.283938 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:33:29.284028 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing healthcheck folders for long running plugins Dec 13 14:33:29.285224 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing locations for inventory plugin Dec 13 14:33:29.285339 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing default location for custom inventory Dec 13 14:33:29.285446 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing default location for file inventory Dec 13 14:33:29.285511 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Initializing default location for role inventory Dec 13 14:33:29.285568 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Init the cloudwatchlogs publisher Dec 13 14:33:29.285631 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:33:29.285705 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:33:29.285843 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:33:29.285915 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:33:29.285980 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:33:29.286050 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:33:29.287118 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:33:29.287226 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:33:29.287291 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:33:29.287352 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:33:29.287441 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:33:29.287515 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO OS: linux, Arch: amd64 Dec 13 14:33:29.289084 amazon-ssm-agent[1624]: datastore file /var/lib/amazon/ssm/i-0c4a1b1681e7c2ef8/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Dec 13 14:33:29.382360 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:33:29.491580 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:33:29.593668 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:33:29.688320 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:33:29.783041 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:33:29.881182 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [instanceID=i-0c4a1b1681e7c2ef8] Starting association polling Dec 13 14:33:29.976813 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:33:29.978336 sshd_keygen[1645]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:33:30.069578 systemd[1]: Finished sshd-keygen.service. Dec 13 14:33:30.073719 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:33:30.074924 systemd[1]: Starting issuegen.service... Dec 13 14:33:30.078503 systemd[1]: Started sshd@0-172.31.18.151:22-139.178.89.65:59218.service. Dec 13 14:33:30.156736 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:33:30.156970 systemd[1]: Finished issuegen.service. Dec 13 14:33:30.163224 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:33:30.190875 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:33:30.231737 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:33:30.234992 systemd[1]: Started getty@tty1.service. Dec 13 14:33:30.238821 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:33:30.241519 systemd[1]: Reached target getty.target. Dec 13 14:33:30.287637 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:33:30.355970 tar[1641]: linux-amd64/LICENSE Dec 13 14:33:30.355970 tar[1641]: linux-amd64/README.md Dec 13 14:33:30.363079 systemd[1]: Finished prepare-helm.service. Dec 13 14:33:30.384666 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:33:30.399523 locksmithd[1693]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:33:30.428885 sshd[1836]: Accepted publickey for core from 139.178.89.65 port 59218 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:30.432032 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:30.447054 systemd[1]: Created slice user-500.slice. Dec 13 14:33:30.449896 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:33:30.460466 systemd-logind[1636]: New session 1 of user core. Dec 13 14:33:30.470057 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:33:30.473255 systemd[1]: Starting user@500.service... Dec 13 14:33:30.478697 (systemd)[1847]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:30.481186 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [HealthCheck] HealthCheck reporting agent health. 
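The sshd_keygen line above regenerates any missing host keys (RSA, ECDSA, ED25519) before sshd accepts its first connection on port 22. The one-shot equivalent with standard OpenSSH tooling:

    ssh-keygen -A   # create any missing host key types under /etc/ssh
    # print a key fingerprint in the same SHA256:... form as the "Accepted publickey" line
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub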
Dec 13 14:33:30.577434 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:33:30.646727 systemd[1847]: Queued start job for default target default.target. Dec 13 14:33:30.648302 systemd[1847]: Reached target paths.target. Dec 13 14:33:30.648342 systemd[1847]: Reached target sockets.target. Dec 13 14:33:30.648361 systemd[1847]: Reached target timers.target. Dec 13 14:33:30.648408 systemd[1847]: Reached target basic.target. Dec 13 14:33:30.648472 systemd[1847]: Reached target default.target. Dec 13 14:33:30.648512 systemd[1847]: Startup finished in 158ms. Dec 13 14:33:30.648544 systemd[1]: Started user@500.service. Dec 13 14:33:30.651297 systemd[1]: Started session-1.scope. Dec 13 14:33:30.673966 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:33:30.684021 systemd[1]: Started kubelet.service. Dec 13 14:33:30.685529 systemd[1]: Reached target multi-user.target. Dec 13 14:33:30.690896 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:33:30.707128 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:33:30.707513 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:33:30.710762 systemd[1]: Startup finished in 966ms (kernel) + 7.886s (initrd) + 11.630s (userspace) = 20.483s. Dec 13 14:33:30.919913 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [OfflineService] Starting document processing engine... Dec 13 14:33:30.935294 systemd[1]: Started sshd@1-172.31.18.151:22-139.178.89.65:59220.service. Dec 13 14:33:31.016726 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:33:31.114470 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:33:31.125925 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 59220 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:31.126622 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:31.137645 systemd[1]: Started session-2.scope. Dec 13 14:33:31.141006 systemd-logind[1636]: New session 2 of user core. Dec 13 14:33:31.211151 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [OfflineService] Starting message polling Dec 13 14:33:31.274722 sshd[1860]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:31.280272 systemd[1]: sshd@1-172.31.18.151:22-139.178.89.65:59220.service: Deactivated successfully. Dec 13 14:33:31.281284 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:33:31.282908 systemd-logind[1636]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:33:31.284765 systemd-logind[1636]: Removed session 2. Dec 13 14:33:31.307082 systemd[1]: Started sshd@2-172.31.18.151:22-139.178.89.65:59226.service. Dec 13 14:33:31.310046 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [OfflineService] Starting send replies to MDS Dec 13 14:33:31.408026 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:33:31.490520 sshd[1870]: Accepted publickey for core from 139.178.89.65 port 59226 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:31.492503 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:31.501624 systemd[1]: Started session-3.scope. 
Dec 13 14:33:31.502808 systemd-logind[1636]: New session 3 of user core. Dec 13 14:33:31.505882 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:33:31.603903 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:33:31.633043 sshd[1870]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:31.637112 systemd[1]: sshd@2-172.31.18.151:22-139.178.89.65:59226.service: Deactivated successfully. Dec 13 14:33:31.638063 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:33:31.638887 systemd-logind[1636]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:33:31.640058 systemd-logind[1636]: Removed session 3. Dec 13 14:33:31.662321 systemd[1]: Started sshd@3-172.31.18.151:22-139.178.89.65:59228.service. Dec 13 14:33:31.703070 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c4a1b1681e7c2ef8, requestId: dffdfa64-ff1f-44a1-bc5a-d2c54c1add76 Dec 13 14:33:31.755774 kubelet[1856]: E1213 14:33:31.755671 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:33:31.758629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:33:31.758755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:33:31.759073 systemd[1]: kubelet.service: Consumed 1.146s CPU time. Dec 13 14:33:31.804630 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] listening reply. Dec 13 14:33:31.838891 sshd[1878]: Accepted publickey for core from 139.178.89.65 port 59228 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:31.842798 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:31.867739 systemd-logind[1636]: New session 4 of user core. Dec 13 14:33:31.868346 systemd[1]: Started session-4.scope. Dec 13 14:33:31.903320 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:33:32.002250 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:33:32.008283 sshd[1878]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:32.017865 systemd[1]: sshd@3-172.31.18.151:22-139.178.89.65:59228.service: Deactivated successfully. Dec 13 14:33:32.019108 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:33:32.020086 systemd-logind[1636]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:33:32.022467 systemd-logind[1636]: Removed session 4. Dec 13 14:33:32.038783 systemd[1]: Started sshd@4-172.31.18.151:22-139.178.89.65:59244.service. 
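The kubelet exit above is the expected pre-bootstrap failure on a node that has not yet joined a cluster: Flatcar starts kubelet.service, but /var/lib/kubelet/config.yaml does not exist until something (typically kubeadm init or kubeadm join) writes it, so the process exits 1 and systemd schedules a restart. A minimal sketch of the file kubelet is looking for (KubeletConfiguration schema; normally generated, shown here only for illustration; the cgroupDriver value matches the SystemdCgroup=true runc option configured in containerd above):

    # /var/lib/kubelet/config.yaml -- minimal illustration, normally written by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd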
Dec 13 14:33:32.101559 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:33:32.202626 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:33:32.212706 sshd[1884]: Accepted publickey for core from 139.178.89.65 port 59244 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:32.217276 sshd[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:32.225795 systemd-logind[1636]: New session 5 of user core. Dec 13 14:33:32.227919 systemd[1]: Started session-5.scope. Dec 13 14:33:32.301802 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:33:32.378904 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:33:32.379430 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:33:32.403189 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c4a1b1681e7c2ef8?role=subscribe&stream=input Dec 13 14:33:32.446395 systemd[1]: Starting docker.service... Dec 13 14:33:32.503063 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c4a1b1681e7c2ef8?role=subscribe&stream=input Dec 13 14:33:32.524246 env[1897]: time="2024-12-13T14:33:32.524127618Z" level=info msg="Starting up" Dec 13 14:33:32.526241 env[1897]: time="2024-12-13T14:33:32.526192908Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:33:32.526241 env[1897]: time="2024-12-13T14:33:32.526223781Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:33:32.526610 env[1897]: time="2024-12-13T14:33:32.526249343Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:33:32.526610 env[1897]: time="2024-12-13T14:33:32.526340174Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:33:32.530925 env[1897]: time="2024-12-13T14:33:32.530878762Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:33:32.530925 env[1897]: time="2024-12-13T14:33:32.530907602Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:33:32.531199 env[1897]: time="2024-12-13T14:33:32.530932241Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:33:32.531199 env[1897]: time="2024-12-13T14:33:32.530982278Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:33:32.544708 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1276138530-merged.mount: Deactivated successfully. Dec 13 14:33:32.603230 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:33:32.703343 amazon-ssm-agent[1624]: 2024-12-13 14:33:29 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:33:36.078100 env[1897]: time="2024-12-13T14:33:36.078051812Z" level=info msg="Loading containers: start." 
Dec 13 14:33:36.393410 kernel: Initializing XFRM netlink socket Dec 13 14:33:36.520759 env[1897]: time="2024-12-13T14:33:36.520710715Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:33:36.521981 (udev-worker)[1911]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:33:36.715514 systemd-networkd[1370]: docker0: Link UP Dec 13 14:33:36.734960 env[1897]: time="2024-12-13T14:33:36.734892187Z" level=info msg="Loading containers: done." Dec 13 14:33:36.757299 env[1897]: time="2024-12-13T14:33:36.757238164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:33:36.757565 env[1897]: time="2024-12-13T14:33:36.757517122Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:33:36.757680 env[1897]: time="2024-12-13T14:33:36.757652870Z" level=info msg="Daemon has completed initialization" Dec 13 14:33:36.824312 systemd[1]: Started docker.service. Dec 13 14:33:36.845680 env[1897]: time="2024-12-13T14:33:36.845595209Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:33:38.609336 env[1647]: time="2024-12-13T14:33:38.604733200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:33:39.334215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299086748.mount: Deactivated successfully. Dec 13 14:33:42.010413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:33:42.010813 systemd[1]: Stopped kubelet.service. Dec 13 14:33:42.010873 systemd[1]: kubelet.service: Consumed 1.146s CPU time. Dec 13 14:33:42.014965 systemd[1]: Starting kubelet.service... Dec 13 14:33:42.602897 systemd[1]: Started kubelet.service. Dec 13 14:33:42.713655 kubelet[2033]: E1213 14:33:42.713580 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:33:42.720140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:33:42.721577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
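Docker's bridge message above notes that docker0 takes the default 172.17.0.0/16 and that --bip overrides it. The persistent form of that flag is the daemon config file (path is Docker's standard location; the address below is an arbitrary example, not from this host):

    # /etc/docker/daemon.json
    {
      "bip": "10.200.0.1/24"
    }

    systemctl restart docker   # the daemon re-creates docker0 with the new address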
Dec 13 14:33:43.282809 env[1647]: time="2024-12-13T14:33:43.282757363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:43.286304 env[1647]: time="2024-12-13T14:33:43.286259220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:43.289201 env[1647]: time="2024-12-13T14:33:43.289156364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:43.292556 env[1647]: time="2024-12-13T14:33:43.292480155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:43.293906 env[1647]: time="2024-12-13T14:33:43.293856685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 14:33:43.307561 env[1647]: time="2024-12-13T14:33:43.307520121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:33:46.670020 env[1647]: time="2024-12-13T14:33:46.669938673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:46.680690 env[1647]: time="2024-12-13T14:33:46.680637336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:46.683815 env[1647]: time="2024-12-13T14:33:46.683769046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:46.686578 env[1647]: time="2024-12-13T14:33:46.686507049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:46.687726 env[1647]: time="2024-12-13T14:33:46.687687124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 14:33:46.711794 env[1647]: time="2024-12-13T14:33:46.711751161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 14:33:47.868954 amazon-ssm-agent[1624]: 2024-12-13 14:33:47 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
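Each PullImage sequence above resolves a tag such as registry.k8s.io/kube-apiserver:v1.30.8 to a content-addressed digest, emits ImageCreate/ImageUpdate events, and returns the local image ID (the sha256:... reference). The same pull can be driven by hand against this containerd socket (standard containerd CLI; the k8s.io namespace is where CRI-managed images live):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.8
    ctr --namespace k8s.io images ls | grep kube-apiserver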
Dec 13 14:33:49.158270 env[1647]: time="2024-12-13T14:33:49.158186197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:49.162034 env[1647]: time="2024-12-13T14:33:49.161913438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:49.165107 env[1647]: time="2024-12-13T14:33:49.165060851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:49.167931 env[1647]: time="2024-12-13T14:33:49.167889687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:49.169223 env[1647]: time="2024-12-13T14:33:49.169038129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 14:33:49.186750 env[1647]: time="2024-12-13T14:33:49.186680087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:33:50.954794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493692502.mount: Deactivated successfully. Dec 13 14:33:51.960024 env[1647]: time="2024-12-13T14:33:51.959966746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:51.964587 env[1647]: time="2024-12-13T14:33:51.964537503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:51.966957 env[1647]: time="2024-12-13T14:33:51.966921574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:51.969139 env[1647]: time="2024-12-13T14:33:51.969066860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:51.970548 env[1647]: time="2024-12-13T14:33:51.970475432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:33:51.988159 env[1647]: time="2024-12-13T14:33:51.988121335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:33:52.616538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108609131.mount: Deactivated successfully. Dec 13 14:33:52.973147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:33:52.973413 systemd[1]: Stopped kubelet.service. Dec 13 14:33:52.975492 systemd[1]: Starting kubelet.service... Dec 13 14:33:53.609230 systemd[1]: Started kubelet.service. 
Dec 13 14:33:53.690868 kubelet[2064]: E1213 14:33:53.690797 2064 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:33:53.693467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:33:53.693647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:33:54.598037 env[1647]: time="2024-12-13T14:33:54.597983280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:54.639441 env[1647]: time="2024-12-13T14:33:54.639391546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:54.643966 env[1647]: time="2024-12-13T14:33:54.643916339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:54.646792 env[1647]: time="2024-12-13T14:33:54.646732692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:54.647820 env[1647]: time="2024-12-13T14:33:54.647782907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:33:54.661765 env[1647]: time="2024-12-13T14:33:54.661721744Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:33:55.213332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613149820.mount: Deactivated successfully. 
Dec 13 14:33:55.224527 env[1647]: time="2024-12-13T14:33:55.224478007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:55.228902 env[1647]: time="2024-12-13T14:33:55.228856257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:55.231683 env[1647]: time="2024-12-13T14:33:55.231514295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:55.235449 env[1647]: time="2024-12-13T14:33:55.235402119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:55.236571 env[1647]: time="2024-12-13T14:33:55.236530823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:33:55.254261 env[1647]: time="2024-12-13T14:33:55.254221991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:33:55.905699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148154843.mount: Deactivated successfully. Dec 13 14:33:59.075205 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:33:59.857994 env[1647]: time="2024-12-13T14:33:59.857941186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:59.892034 env[1647]: time="2024-12-13T14:33:59.891982810Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:59.926691 env[1647]: time="2024-12-13T14:33:59.926613385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:59.983106 env[1647]: time="2024-12-13T14:33:59.981768774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:59.983627 env[1647]: time="2024-12-13T14:33:59.983587591Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 14:34:03.894484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:34:03.894751 systemd[1]: Stopped kubelet.service. Dec 13 14:34:03.898614 systemd[1]: Starting kubelet.service... Dec 13 14:34:06.199142 systemd[1]: Started kubelet.service. 
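Each pull above closes with a "returns image reference" entry pairing a tag with its content digest, so the full image set (kube-apiserver through etcd, plus coredns and pause) can be recovered mechanically from the journal. A small extraction sketch (the regex is written against the escaped-quote msg format of these captured lines, which is an assumption about the capture; a differently rendered journal would need a looser pattern):

    # Sketch: extract image -> digest pairs from the containerd
    # "PullImage ... returns image reference" journal entries above.
    import re
    import sys

    PULL_RE = re.compile(
        r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference '
        r'\\"(?P<ref>sha256:[0-9a-f]+)\\"'
    )

    def pulled_images(lines):
        for line in lines:
            match = PULL_RE.search(line)
            if match:
                yield match.group("image"), match.group("ref")

    if __name__ == "__main__":
        # e.g. journalctl output piped in on stdin
        for image, ref in pulled_images(sys.stdin):
            print(image, ref, sep="\t")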
Dec 13 14:34:06.296823 kubelet[2148]: E1213 14:34:06.296774 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:34:06.300034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:34:06.300209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:34:06.543250 systemd[1]: Stopped kubelet.service. Dec 13 14:34:06.547429 systemd[1]: Starting kubelet.service... Dec 13 14:34:06.577017 systemd[1]: Reloading. Dec 13 14:34:06.691984 /usr/lib/systemd/system-generators/torcx-generator[2179]: time="2024-12-13T14:34:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:34:06.692019 /usr/lib/systemd/system-generators/torcx-generator[2179]: time="2024-12-13T14:34:06Z" level=info msg="torcx already run" Dec 13 14:34:06.811408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:34:06.811430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:34:06.835038 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:34:06.978567 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:34:06.978675 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:34:06.978996 systemd[1]: Stopped kubelet.service. Dec 13 14:34:06.981605 systemd[1]: Starting kubelet.service... Dec 13 14:34:07.363734 systemd[1]: Started kubelet.service. Dec 13 14:34:07.423118 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:34:07.423118 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:34:07.423118 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
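Besides restarting the kubelet, the reload above flags two deprecated cgroup-v1 directives in locksmithd.service: CPUShares= (superseded by CPUWeight=) and MemoryLimit= (superseded by MemoryMax=). A sketch of a drop-in carrying the modern spellings (the concrete weight and limit values are illustrative assumptions, not read from the original unit):

    # Sketch: render a locksmithd drop-in using the cgroup-v2 directives
    # the reload warnings recommend. The values here are placeholders.
    import textwrap
    from pathlib import Path

    DROPIN = textwrap.dedent("""\
        [Service]
        CPUWeight=100
        MemoryMax=512M
    """)

    def write_dropin(root: str = ".") -> None:
        dropin_dir = Path(root) / "etc/systemd/system/locksmithd.service.d"
        dropin_dir.mkdir(parents=True, exist_ok=True)
        (dropin_dir / "10-cgroup-v2.conf").write_text(DROPIN)

    if __name__ == "__main__":
        write_dropin()  # writes under ./etc for review, not the live system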
Dec 13 14:34:07.424685 kubelet[2236]: I1213 14:34:07.424651 2236 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:34:08.195027 kubelet[2236]: I1213 14:34:08.194985 2236 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:34:08.195194 kubelet[2236]: I1213 14:34:08.195054 2236 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:34:08.197254 kubelet[2236]: I1213 14:34:08.197212 2236 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:34:08.238265 kubelet[2236]: I1213 14:34:08.238223 2236 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:34:08.239363 kubelet[2236]: E1213 14:34:08.239331 2236 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.254535 kubelet[2236]: I1213 14:34:08.254507 2236 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:34:08.259457 kubelet[2236]: I1213 14:34:08.259390 2236 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:34:08.259695 kubelet[2236]: I1213 14:34:08.259452 2236 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-151","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:34:08.259838 kubelet[2236]: I1213 14:34:08.259714 2236 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:34:08.259838 kubelet[2236]: I1213 14:34:08.259730 2236 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:34:08.261037 kubelet[2236]: I1213 14:34:08.261009 2236 state_mem.go:36] "Initialized new in-memory state 
store" Dec 13 14:34:08.263525 kubelet[2236]: I1213 14:34:08.263499 2236 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:34:08.263525 kubelet[2236]: I1213 14:34:08.263528 2236 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:34:08.263738 kubelet[2236]: I1213 14:34:08.263557 2236 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:34:08.263738 kubelet[2236]: I1213 14:34:08.263574 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:34:08.275120 kubelet[2236]: W1213 14:34:08.274727 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.275120 kubelet[2236]: E1213 14:34:08.274873 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.275120 kubelet[2236]: W1213 14:34:08.274965 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-151&limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.275120 kubelet[2236]: E1213 14:34:08.275011 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-151&limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.275457 kubelet[2236]: I1213 14:34:08.275249 2236 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:34:08.281883 kubelet[2236]: I1213 14:34:08.281846 2236 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:34:08.282017 kubelet[2236]: W1213 14:34:08.281932 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:34:08.283244 kubelet[2236]: I1213 14:34:08.283221 2236 server.go:1264] "Started kubelet" Dec 13 14:34:08.284766 kubelet[2236]: I1213 14:34:08.284726 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:34:08.286037 kubelet[2236]: I1213 14:34:08.286014 2236 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:34:08.297130 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
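The nodeConfig entry above flattens the kubelet's hard eviction thresholds into one JSON blob. Unpacked, they are the stock defaults: memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%. A short sketch reshaping the logged values into the usual evictionHard notation (quantities and percentages copied from the log; the helper itself is illustrative):

    # Sketch: reshape the HardEvictionThresholds logged in nodeConfig above
    # into evictionHard notation. Values are copied from the journal entry.
    THRESHOLDS = [
        ("memory.available", "100Mi", None),
        ("nodefs.available", None, 0.10),
        ("nodefs.inodesFree", None, 0.05),
        ("imagefs.available", None, 0.15),
        ("imagefs.inodesFree", None, 0.05),
    ]

    def eviction_hard(thresholds):
        return {
            signal: quantity if quantity is not None else f"{percentage:.0%}"
            for signal, quantity, percentage in thresholds
        }

    if __name__ == "__main__":
        print(eviction_hard(THRESHOLDS))
        # {'memory.available': '100Mi', 'nodefs.available': '10%', ...}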
Dec 13 14:34:08.298690 kubelet[2236]: I1213 14:34:08.297741 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:34:08.301100 kubelet[2236]: I1213 14:34:08.301037 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:34:08.301308 kubelet[2236]: I1213 14:34:08.301286 2236 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:34:08.302363 kubelet[2236]: E1213 14:34:08.301696 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.151:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.151:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-151.1810c32b88c3e61f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-151,UID:ip-172-31-18-151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-151,},FirstTimestamp:2024-12-13 14:34:08.283190815 +0000 UTC m=+0.911663062,LastTimestamp:2024-12-13 14:34:08.283190815 +0000 UTC m=+0.911663062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-151,}" Dec 13 14:34:08.306987 kubelet[2236]: E1213 14:34:08.306346 2236 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-151\" not found" Dec 13 14:34:08.306987 kubelet[2236]: I1213 14:34:08.306414 2236 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:34:08.306987 kubelet[2236]: I1213 14:34:08.306532 2236 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:34:08.306987 kubelet[2236]: I1213 14:34:08.306594 2236 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:34:08.307292 kubelet[2236]: W1213 14:34:08.307013 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.307292 kubelet[2236]: E1213 14:34:08.307066 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.307916 kubelet[2236]: E1213 14:34:08.307692 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": dial tcp 172.31.18.151:6443: connect: connection refused" interval="200ms" Dec 13 14:34:08.308637 kubelet[2236]: E1213 14:34:08.308609 2236 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:34:08.308734 kubelet[2236]: I1213 14:34:08.308703 2236 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:34:08.308855 kubelet[2236]: I1213 14:34:08.308836 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:34:08.310096 kubelet[2236]: I1213 14:34:08.310075 2236 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:34:08.326997 kubelet[2236]: I1213 14:34:08.326974 2236 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:34:08.327191 kubelet[2236]: I1213 14:34:08.327178 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:34:08.327279 kubelet[2236]: I1213 14:34:08.327269 2236 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:34:08.335117 kubelet[2236]: I1213 14:34:08.335086 2236 policy_none.go:49] "None policy: Start" Dec 13 14:34:08.337943 kubelet[2236]: I1213 14:34:08.337919 2236 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:34:08.338423 kubelet[2236]: I1213 14:34:08.338408 2236 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:34:08.353255 kubelet[2236]: I1213 14:34:08.353204 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:34:08.355001 kubelet[2236]: I1213 14:34:08.354976 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:34:08.355163 kubelet[2236]: I1213 14:34:08.355152 2236 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:34:08.355270 kubelet[2236]: I1213 14:34:08.355260 2236 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:34:08.355586 kubelet[2236]: E1213 14:34:08.355559 2236 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:34:08.361144 systemd[1]: Created slice kubepods.slice. Dec 13 14:34:08.366041 kubelet[2236]: W1213 14:34:08.366007 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.366226 kubelet[2236]: E1213 14:34:08.366211 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:08.368545 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:34:08.372204 systemd[1]: Created slice kubepods-besteffort.slice. 
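All of the refused connections above target the same endpoint, 172.31.18.151:6443, and for the same reason: this kubelet is the component that will launch kube-apiserver as a static pod, so nothing can answer on 6443 until it does. The retries are expected to clear on their own. A minimal readiness-probe sketch (host and port taken from the refused calls; the polling loop itself is illustrative):

    # Sketch: poll the apiserver endpoint seen in the refused connections
    # above until it accepts TCP connections. Loops until reachable.
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 1.0,
                      interval: float = 2.0) -> None:
        while True:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return
            except OSError:
                time.sleep(interval)

    if __name__ == "__main__":
        wait_for_port("172.31.18.151", 6443)
        print("apiserver endpoint is accepting connections")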
Dec 13 14:34:08.380532 kubelet[2236]: I1213 14:34:08.380506 2236 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:34:08.380958 kubelet[2236]: I1213 14:34:08.380922 2236 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:34:08.381133 kubelet[2236]: I1213 14:34:08.381125 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:34:08.387456 kubelet[2236]: E1213 14:34:08.387431 2236 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-151\" not found" Dec 13 14:34:08.414449 kubelet[2236]: I1213 14:34:08.414415 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:08.415537 kubelet[2236]: E1213 14:34:08.415391 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.151:6443/api/v1/nodes\": dial tcp 172.31.18.151:6443: connect: connection refused" node="ip-172-31-18-151" Dec 13 14:34:08.457729 kubelet[2236]: I1213 14:34:08.456746 2236 topology_manager.go:215] "Topology Admit Handler" podUID="fd67c389c350d94150e524c75b270e11" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-151" Dec 13 14:34:08.461558 kubelet[2236]: I1213 14:34:08.461416 2236 topology_manager.go:215] "Topology Admit Handler" podUID="f8504dd8f92ad6bb6798e7a227a8361f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.470750 kubelet[2236]: I1213 14:34:08.470712 2236 topology_manager.go:215] "Topology Admit Handler" podUID="6fc52701e9bc362e898f1b86472cf221" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-151" Dec 13 14:34:08.492901 systemd[1]: Created slice kubepods-burstable-podfd67c389c350d94150e524c75b270e11.slice. Dec 13 14:34:08.504291 systemd[1]: Created slice kubepods-burstable-podf8504dd8f92ad6bb6798e7a227a8361f.slice. 
Dec 13 14:34:08.507812 kubelet[2236]: I1213 14:34:08.507783 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.508010 kubelet[2236]: I1213 14:34:08.507993 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.508114 kubelet[2236]: I1213 14:34:08.508099 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fc52701e9bc362e898f1b86472cf221-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-151\" (UID: \"6fc52701e9bc362e898f1b86472cf221\") " pod="kube-system/kube-scheduler-ip-172-31-18-151" Dec 13 14:34:08.508209 kubelet[2236]: I1213 14:34:08.508195 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.508319 kubelet[2236]: I1213 14:34:08.508304 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:08.508445 kubelet[2236]: I1213 14:34:08.508420 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:08.508445 kubelet[2236]: I1213 14:34:08.508450 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.508561 kubelet[2236]: I1213 14:34:08.508474 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:08.508561 kubelet[2236]: I1213 14:34:08.508495 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-ca-certs\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:08.508561 kubelet[2236]: E1213 14:34:08.508193 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": dial tcp 172.31.18.151:6443: connect: connection refused" interval="400ms" Dec 13 14:34:08.517758 systemd[1]: Created slice kubepods-burstable-pod6fc52701e9bc362e898f1b86472cf221.slice. Dec 13 14:34:08.617772 kubelet[2236]: I1213 14:34:08.617731 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:08.618246 kubelet[2236]: E1213 14:34:08.618203 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.151:6443/api/v1/nodes\": dial tcp 172.31.18.151:6443: connect: connection refused" node="ip-172-31-18-151" Dec 13 14:34:08.803249 env[1647]: time="2024-12-13T14:34:08.803121931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-151,Uid:fd67c389c350d94150e524c75b270e11,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:08.819021 env[1647]: time="2024-12-13T14:34:08.818829167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-151,Uid:f8504dd8f92ad6bb6798e7a227a8361f,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:08.822846 env[1647]: time="2024-12-13T14:34:08.822563083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-151,Uid:6fc52701e9bc362e898f1b86472cf221,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:08.910073 kubelet[2236]: E1213 14:34:08.909900 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": dial tcp 172.31.18.151:6443: connect: connection refused" interval="800ms" Dec 13 14:34:09.020830 kubelet[2236]: I1213 14:34:09.020799 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:09.021156 kubelet[2236]: E1213 14:34:09.021127 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.151:6443/api/v1/nodes\": dial tcp 172.31.18.151:6443: connect: connection refused" node="ip-172-31-18-151" Dec 13 14:34:09.320851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867273096.mount: Deactivated successfully. 
Dec 13 14:34:09.337725 env[1647]: time="2024-12-13T14:34:09.337673143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.339710 env[1647]: time="2024-12-13T14:34:09.339661458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.344938 env[1647]: time="2024-12-13T14:34:09.344889530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.346985 env[1647]: time="2024-12-13T14:34:09.346938876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.348624 env[1647]: time="2024-12-13T14:34:09.348585655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.350928 env[1647]: time="2024-12-13T14:34:09.350889326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.355137 env[1647]: time="2024-12-13T14:34:09.355090507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.357824 env[1647]: time="2024-12-13T14:34:09.357782202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.359896 env[1647]: time="2024-12-13T14:34:09.359851538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.361770 env[1647]: time="2024-12-13T14:34:09.361728340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.367858 env[1647]: time="2024-12-13T14:34:09.367796761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.376190 env[1647]: time="2024-12-13T14:34:09.376141323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:09.418016 env[1647]: time="2024-12-13T14:34:09.417945864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:09.418016 env[1647]: time="2024-12-13T14:34:09.417988048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:09.418255 env[1647]: time="2024-12-13T14:34:09.418211521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:09.418569 env[1647]: time="2024-12-13T14:34:09.418524522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531 pid=2273 runtime=io.containerd.runc.v2 Dec 13 14:34:09.443233 kubelet[2236]: W1213 14:34:09.443153 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.443233 kubelet[2236]: E1213 14:34:09.443239 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.443882 env[1647]: time="2024-12-13T14:34:09.443792817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:09.443982 env[1647]: time="2024-12-13T14:34:09.443909808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:09.443982 env[1647]: time="2024-12-13T14:34:09.443946671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:09.444243 env[1647]: time="2024-12-13T14:34:09.444193802Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7094baf0a1acc3bdf54279be67c0e41d9abb776eeaacc25d93ee462651966636 pid=2289 runtime=io.containerd.runc.v2 Dec 13 14:34:09.450991 env[1647]: time="2024-12-13T14:34:09.450928362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:09.451202 env[1647]: time="2024-12-13T14:34:09.451173496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:09.451323 env[1647]: time="2024-12-13T14:34:09.451298942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:09.451663 env[1647]: time="2024-12-13T14:34:09.451627468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689 pid=2312 runtime=io.containerd.runc.v2 Dec 13 14:34:09.460426 systemd[1]: Started cri-containerd-344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531.scope. Dec 13 14:34:09.487006 systemd[1]: Started cri-containerd-41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689.scope. Dec 13 14:34:09.509568 systemd[1]: Started cri-containerd-7094baf0a1acc3bdf54279be67c0e41d9abb776eeaacc25d93ee462651966636.scope. 
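The "starting signal loop" entries above are the runc v2 shims coming up, one per pod sandbox, each then wrapped by systemd in a cri-containerd-<sandbox-id>.scope unit; the RunPodSandbox returns and the CreateContainer/StartContainer calls for the three control-plane pods follow just below. The same three-step CRI flow can be driven by hand with crictl; a sketch (pod.json and container.json are assumed spec files, and crictl is assumed to point at the containerd socket):

    # Sketch: replay the RunPodSandbox -> CreateContainer -> StartContainer
    # sequence seen in the surrounding entries using the crictl CLI.
    import subprocess

    def run(*argv: str) -> str:
        result = subprocess.run(argv, check=True, capture_output=True, text=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        pod_id = run("crictl", "runp", "pod.json")          # RunPodSandbox
        ctr_id = run("crictl", "create", pod_id,
                     "container.json", "pod.json")          # CreateContainer
        run("crictl", "start", ctr_id)                      # StartContainer
        print(f"sandbox={pod_id} container={ctr_id}")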
Dec 13 14:34:09.564139 kubelet[2236]: W1213 14:34:09.564014 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-151&limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.564139 kubelet[2236]: E1213 14:34:09.564090 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-151&limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.595856 kubelet[2236]: W1213 14:34:09.595674 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.595856 kubelet[2236]: E1213 14:34:09.595750 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.655343 env[1647]: time="2024-12-13T14:34:09.655293200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-151,Uid:f8504dd8f92ad6bb6798e7a227a8361f,Namespace:kube-system,Attempt:0,} returns sandbox id \"344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531\"" Dec 13 14:34:09.665202 env[1647]: time="2024-12-13T14:34:09.665154968Z" level=info msg="CreateContainer within sandbox \"344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:34:09.669834 env[1647]: time="2024-12-13T14:34:09.669789670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-151,Uid:fd67c389c350d94150e524c75b270e11,Namespace:kube-system,Attempt:0,} returns sandbox id \"7094baf0a1acc3bdf54279be67c0e41d9abb776eeaacc25d93ee462651966636\"" Dec 13 14:34:09.670560 env[1647]: time="2024-12-13T14:34:09.670517600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-151,Uid:6fc52701e9bc362e898f1b86472cf221,Namespace:kube-system,Attempt:0,} returns sandbox id \"41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689\"" Dec 13 14:34:09.674884 env[1647]: time="2024-12-13T14:34:09.674820780Z" level=info msg="CreateContainer within sandbox \"41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:34:09.675318 env[1647]: time="2024-12-13T14:34:09.675281849Z" level=info msg="CreateContainer within sandbox \"7094baf0a1acc3bdf54279be67c0e41d9abb776eeaacc25d93ee462651966636\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:34:09.711127 kubelet[2236]: E1213 14:34:09.711070 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": dial tcp 172.31.18.151:6443: connect: connection refused" interval="1.6s" Dec 13 14:34:09.714016 env[1647]: time="2024-12-13T14:34:09.713956934Z" level=info msg="CreateContainer within sandbox 
\"7094baf0a1acc3bdf54279be67c0e41d9abb776eeaacc25d93ee462651966636\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"200391b37374f32923db4ec43a205d7bc1fcd575b320ec2db05d0db1dc54b451\"" Dec 13 14:34:09.715070 env[1647]: time="2024-12-13T14:34:09.715037866Z" level=info msg="StartContainer for \"200391b37374f32923db4ec43a205d7bc1fcd575b320ec2db05d0db1dc54b451\"" Dec 13 14:34:09.717906 env[1647]: time="2024-12-13T14:34:09.717856640Z" level=info msg="CreateContainer within sandbox \"344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8\"" Dec 13 14:34:09.718572 env[1647]: time="2024-12-13T14:34:09.718532791Z" level=info msg="StartContainer for \"350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8\"" Dec 13 14:34:09.725644 env[1647]: time="2024-12-13T14:34:09.725590434Z" level=info msg="CreateContainer within sandbox \"41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce\"" Dec 13 14:34:09.726210 env[1647]: time="2024-12-13T14:34:09.726177892Z" level=info msg="StartContainer for \"614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce\"" Dec 13 14:34:09.746818 systemd[1]: Started cri-containerd-200391b37374f32923db4ec43a205d7bc1fcd575b320ec2db05d0db1dc54b451.scope. Dec 13 14:34:09.772083 systemd[1]: Started cri-containerd-350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8.scope. Dec 13 14:34:09.803934 systemd[1]: Started cri-containerd-614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce.scope. 
Dec 13 14:34:09.825450 kubelet[2236]: I1213 14:34:09.824887 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:09.825450 kubelet[2236]: E1213 14:34:09.825367 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.151:6443/api/v1/nodes\": dial tcp 172.31.18.151:6443: connect: connection refused" node="ip-172-31-18-151" Dec 13 14:34:09.848461 kubelet[2236]: W1213 14:34:09.848221 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.848461 kubelet[2236]: E1213 14:34:09.848266 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:09.898639 env[1647]: time="2024-12-13T14:34:09.898579887Z" level=info msg="StartContainer for \"200391b37374f32923db4ec43a205d7bc1fcd575b320ec2db05d0db1dc54b451\" returns successfully" Dec 13 14:34:09.947917 env[1647]: time="2024-12-13T14:34:09.947860098Z" level=info msg="StartContainer for \"350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8\" returns successfully" Dec 13 14:34:09.964462 env[1647]: time="2024-12-13T14:34:09.964399071Z" level=info msg="StartContainer for \"614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce\" returns successfully" Dec 13 14:34:10.273996 kubelet[2236]: E1213 14:34:10.273870 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.151:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.151:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-151.1810c32b88c3e61f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-151,UID:ip-172-31-18-151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-151,},FirstTimestamp:2024-12-13 14:34:08.283190815 +0000 UTC m=+0.911663062,LastTimestamp:2024-12-13 14:34:08.283190815 +0000 UTC m=+0.911663062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-151,}" Dec 13 14:34:10.285339 kubelet[2236]: E1213 14:34:10.285298 2236 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:11.254668 kubelet[2236]: W1213 14:34:11.254592 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.151:6443: connect: connection refused Dec 13 14:34:11.255231 kubelet[2236]: E1213 14:34:11.255214 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.18.151:6443: connect: connection refused Dec 13 14:34:11.312527 kubelet[2236]: E1213 14:34:11.312480 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": dial tcp 172.31.18.151:6443: connect: connection refused" interval="3.2s" Dec 13 14:34:11.427460 kubelet[2236]: I1213 14:34:11.427430 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:11.428181 kubelet[2236]: E1213 14:34:11.428152 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.151:6443/api/v1/nodes\": dial tcp 172.31.18.151:6443: connect: connection refused" node="ip-172-31-18-151" Dec 13 14:34:13.230973 update_engine[1637]: I1213 14:34:13.230448 1637 update_attempter.cc:509] Updating boot flags... Dec 13 14:34:14.273426 kubelet[2236]: E1213 14:34:14.270309 2236 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-151" not found Dec 13 14:34:14.275759 kubelet[2236]: I1213 14:34:14.273468 2236 apiserver.go:52] "Watching apiserver" Dec 13 14:34:14.307160 kubelet[2236]: I1213 14:34:14.307129 2236 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:34:14.518452 kubelet[2236]: E1213 14:34:14.518415 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-151\" not found" node="ip-172-31-18-151" Dec 13 14:34:14.631034 kubelet[2236]: I1213 14:34:14.630856 2236 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:14.635153 kubelet[2236]: E1213 14:34:14.635116 2236 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-151" not found Dec 13 14:34:14.640966 kubelet[2236]: I1213 14:34:14.640897 2236 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-151" Dec 13 14:34:16.341208 systemd[1]: Reloading. Dec 13 14:34:16.497200 /usr/lib/systemd/system-generators/torcx-generator[2788]: time="2024-12-13T14:34:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:34:16.500564 /usr/lib/systemd/system-generators/torcx-generator[2788]: time="2024-12-13T14:34:16Z" level=info msg="torcx already run" Dec 13 14:34:16.677691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:34:16.677714 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:34:16.707231 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:34:16.931356 systemd[1]: Stopping kubelet.service... Dec 13 14:34:16.948406 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:34:16.948643 systemd[1]: Stopped kubelet.service. 
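One pattern worth noting across the retry storm: the "Failed to ensure lease exists, will retry" interval doubles on every attempt, 200ms, 400ms, 800ms, 1.6s, and finally 3.2s above, i.e. plain exponential backoff. A one-screen sketch reproducing the sequence (base and factor read off the log; the step count and any eventual cap are assumptions):

    # Sketch: the doubling retry interval seen in the lease errors above.
    # The 0.2s base and x2 factor are read off the log; five steps shown.
    def backoff(base: float = 0.2, factor: float = 2.0, steps: int = 5):
        delay = base
        for _ in range(steps):
            yield delay
            delay *= factor

    if __name__ == "__main__":
        print([f"{delay:g}s" for delay in backoff()])
        # ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']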
Dec 13 14:34:16.948712 systemd[1]: kubelet.service: Consumed 1.177s CPU time. Dec 13 14:34:16.951724 systemd[1]: Starting kubelet.service... Dec 13 14:34:17.896253 amazon-ssm-agent[1624]: 2024-12-13 14:34:17 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:34:19.068560 systemd[1]: Started kubelet.service. Dec 13 14:34:19.190398 kubelet[2845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:34:19.190398 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:34:19.190398 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:34:19.190398 kubelet[2845]: I1213 14:34:19.188061 2845 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:34:19.196451 kubelet[2845]: I1213 14:34:19.196420 2845 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:34:19.196451 kubelet[2845]: I1213 14:34:19.196449 2845 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:34:19.196887 kubelet[2845]: I1213 14:34:19.196868 2845 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:34:19.199802 kubelet[2845]: I1213 14:34:19.199672 2845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:34:19.204725 kubelet[2845]: I1213 14:34:19.204682 2845 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:34:19.221846 kubelet[2845]: I1213 14:34:19.221802 2845 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:34:19.223939 kubelet[2845]: I1213 14:34:19.223886 2845 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:34:19.225541 kubelet[2845]: I1213 14:34:19.223938 2845 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-151","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:34:19.225541 kubelet[2845]: I1213 14:34:19.225562 2845 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:34:19.225541 kubelet[2845]: I1213 14:34:19.225582 2845 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:34:19.226254 kubelet[2845]: I1213 14:34:19.225647 2845 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:34:19.226254 kubelet[2845]: I1213 14:34:19.225892 2845 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:34:19.226254 kubelet[2845]: I1213 14:34:19.225917 2845 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:34:19.226254 kubelet[2845]: I1213 14:34:19.226151 2845 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:34:19.226254 kubelet[2845]: I1213 14:34:19.226177 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:34:19.230843 kubelet[2845]: I1213 14:34:19.230435 2845 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:34:19.230843 kubelet[2845]: I1213 14:34:19.230660 2845 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:34:19.232108 kubelet[2845]: I1213 14:34:19.231317 2845 server.go:1264] "Started kubelet" Dec 13 14:34:19.231629 sudo[2858]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:34:19.231951 sudo[2858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:34:19.234527 kubelet[2845]: I1213 14:34:19.234296 2845 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Dec 13 14:34:19.250509 kubelet[2845]: I1213 14:34:19.250449 2845 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:34:19.252316 kubelet[2845]: I1213 14:34:19.252288 2845 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:34:19.253236 kubelet[2845]: I1213 14:34:19.253205 2845 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:34:19.254033 kubelet[2845]: I1213 14:34:19.254004 2845 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:34:19.254541 kubelet[2845]: I1213 14:34:19.254518 2845 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:34:19.255268 kubelet[2845]: I1213 14:34:19.255130 2845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:34:19.255608 kubelet[2845]: I1213 14:34:19.255592 2845 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:34:19.256715 kubelet[2845]: I1213 14:34:19.256696 2845 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:34:19.256971 kubelet[2845]: I1213 14:34:19.256948 2845 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:34:19.312966 kubelet[2845]: I1213 14:34:19.312921 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:34:19.318593 kubelet[2845]: I1213 14:34:19.318561 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:34:19.318808 kubelet[2845]: I1213 14:34:19.318796 2845 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:34:19.318981 kubelet[2845]: I1213 14:34:19.318968 2845 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:34:19.319127 kubelet[2845]: E1213 14:34:19.319107 2845 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:34:19.324893 kubelet[2845]: I1213 14:34:19.322975 2845 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:34:19.348224 kubelet[2845]: E1213 14:34:19.348189 2845 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:34:19.394803 kubelet[2845]: I1213 14:34:19.394668 2845 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-151" Dec 13 14:34:19.417469 kubelet[2845]: I1213 14:34:19.416649 2845 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-151" Dec 13 14:34:19.417469 kubelet[2845]: I1213 14:34:19.416735 2845 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-151" Dec 13 14:34:19.419407 kubelet[2845]: E1213 14:34:19.419352 2845 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:34:19.454389 kubelet[2845]: I1213 14:34:19.454342 2845 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:34:19.454389 kubelet[2845]: I1213 14:34:19.454385 2845 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:34:19.454599 kubelet[2845]: I1213 14:34:19.454407 2845 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:34:19.454599 kubelet[2845]: I1213 14:34:19.454589 2845 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:34:19.454684 kubelet[2845]: I1213 14:34:19.454603 2845 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:34:19.454684 kubelet[2845]: I1213 14:34:19.454630 2845 policy_none.go:49] "None policy: Start" Dec 13 14:34:19.455589 kubelet[2845]: I1213 14:34:19.455566 2845 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:34:19.455740 kubelet[2845]: I1213 14:34:19.455594 2845 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:34:19.455976 kubelet[2845]: I1213 14:34:19.455844 2845 state_mem.go:75] "Updated machine memory state" Dec 13 14:34:19.462122 kubelet[2845]: I1213 14:34:19.462097 2845 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:34:19.462432 kubelet[2845]: I1213 14:34:19.462391 2845 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:34:19.465338 kubelet[2845]: I1213 14:34:19.465302 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:34:19.619859 kubelet[2845]: I1213 14:34:19.619736 2845 topology_manager.go:215] "Topology Admit Handler" podUID="fd67c389c350d94150e524c75b270e11" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-151" Dec 13 14:34:19.620137 kubelet[2845]: I1213 14:34:19.620119 2845 topology_manager.go:215] "Topology Admit Handler" podUID="f8504dd8f92ad6bb6798e7a227a8361f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.620322 kubelet[2845]: I1213 14:34:19.620298 2845 topology_manager.go:215] "Topology Admit Handler" podUID="6fc52701e9bc362e898f1b86472cf221" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-151" Dec 13 14:34:19.629533 kubelet[2845]: E1213 14:34:19.629499 2845 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-151\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-151" Dec 13 14:34:19.667680 kubelet[2845]: I1213 14:34:19.667641 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:19.667909 kubelet[2845]: I1213 14:34:19.667885 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.668093 kubelet[2845]: I1213 14:34:19.668075 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.668203 kubelet[2845]: I1213 14:34:19.668190 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.668302 kubelet[2845]: I1213 14:34:19.668275 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-ca-certs\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:19.668371 kubelet[2845]: I1213 14:34:19.668322 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.668371 kubelet[2845]: I1213 14:34:19.668353 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8504dd8f92ad6bb6798e7a227a8361f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-151\" (UID: \"f8504dd8f92ad6bb6798e7a227a8361f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-151" Dec 13 14:34:19.668485 kubelet[2845]: I1213 14:34:19.668398 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fc52701e9bc362e898f1b86472cf221-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-151\" (UID: \"6fc52701e9bc362e898f1b86472cf221\") " pod="kube-system/kube-scheduler-ip-172-31-18-151" Dec 13 14:34:19.668485 kubelet[2845]: I1213 14:34:19.668433 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd67c389c350d94150e524c75b270e11-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-151\" (UID: \"fd67c389c350d94150e524c75b270e11\") " pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:20.246867 kubelet[2845]: I1213 14:34:20.246796 2845 apiserver.go:52] "Watching apiserver" Dec 13 14:34:20.254586 kubelet[2845]: I1213 14:34:20.254553 2845 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:34:20.386586 sudo[2858]: pam_unix(sudo:session): session closed for user root Dec 13 14:34:20.446186 kubelet[2845]: E1213 14:34:20.446154 2845 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-151\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-151" Dec 13 14:34:20.494945 kubelet[2845]: I1213 14:34:20.494831 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-151" podStartSLOduration=5.494773141 podStartE2EDuration="5.494773141s" podCreationTimestamp="2024-12-13 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:34:20.476801306 +0000 UTC m=+1.387098756" watchObservedRunningTime="2024-12-13 14:34:20.494773141 +0000 UTC m=+1.405070583" Dec 13 14:34:20.513836 kubelet[2845]: I1213 14:34:20.513519 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-151" podStartSLOduration=1.5134994389999998 podStartE2EDuration="1.513499439s" podCreationTimestamp="2024-12-13 14:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:34:20.496652313 +0000 UTC m=+1.406949764" watchObservedRunningTime="2024-12-13 14:34:20.513499439 +0000 UTC m=+1.423796886" Dec 13 14:34:20.551773 kubelet[2845]: I1213 14:34:20.551519 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-151" podStartSLOduration=1.551326091 podStartE2EDuration="1.551326091s" podCreationTimestamp="2024-12-13 14:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:34:20.515776449 +0000 UTC m=+1.426073898" watchObservedRunningTime="2024-12-13 14:34:20.551326091 +0000 UTC m=+1.461623538" Dec 13 14:34:23.211290 sudo[1887]: pam_unix(sudo:session): session closed for user root Dec 13 14:34:23.234968 sshd[1884]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:23.238561 systemd[1]: sshd@4-172.31.18.151:22-139.178.89.65:59244.service: Deactivated successfully. Dec 13 14:34:23.240360 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:34:23.240941 systemd[1]: session-5.scope: Consumed 5.523s CPU time. Dec 13 14:34:23.250961 systemd-logind[1636]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:34:23.256913 systemd-logind[1636]: Removed session 5. Dec 13 14:34:30.470225 kubelet[2845]: I1213 14:34:30.470129 2845 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:34:30.470876 env[1647]: time="2024-12-13T14:34:30.470842932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
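The three deprecation warnings kubelet[2845] printed at startup (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at the same migration the messages themselves describe: the first and last become fields in the KubeletConfiguration file passed via --config, and the sandbox image moves into the container runtime's own config. A minimal sketch of that layout; the socket path, plugin directory, and pause tag below are illustrative assumptions, not values read from this host:

    # KubeletConfiguration (kubelet.config.k8s.io/v1beta1), loaded via kubelet --config
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"   # was --container-runtime-endpoint
    volumePluginDir: "/var/lib/kubelet/volumeplugins"                    # was --volume-plugin-dir

    # Runtime side (replaces --pod-infra-container-image), e.g. /etc/containerd/config.toml:
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.9"    # tag illustrative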
Dec 13 14:34:30.471199 kubelet[2845]: I1213 14:34:30.471098 2845 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:34:31.400182 kubelet[2845]: I1213 14:34:31.400131 2845 topology_manager.go:215] "Topology Admit Handler" podUID="86172b55-af36-4ffd-88ca-bd894f604735" podNamespace="kube-system" podName="kube-proxy-fx5xb" Dec 13 14:34:31.412126 systemd[1]: Created slice kubepods-besteffort-pod86172b55_af36_4ffd_88ca_bd894f604735.slice. Dec 13 14:34:31.422135 kubelet[2845]: I1213 14:34:31.422105 2845 topology_manager.go:215] "Topology Admit Handler" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" podNamespace="kube-system" podName="cilium-kv2tc" Dec 13 14:34:31.444734 kubelet[2845]: I1213 14:34:31.444032 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-lib-modules\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.444734 kubelet[2845]: I1213 14:34:31.444089 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-config-path\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.444734 kubelet[2845]: I1213 14:34:31.444707 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-kernel\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.444734 kubelet[2845]: I1213 14:34:31.444741 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-etc-cni-netd\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444762 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-xtables-lock\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444790 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a9ba9b9-945a-4683-922e-5d87687737bf-clustermesh-secrets\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444813 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-hubble-tls\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444843 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/86172b55-af36-4ffd-88ca-bd894f604735-lib-modules\") pod \"kube-proxy-fx5xb\" (UID: \"86172b55-af36-4ffd-88ca-bd894f604735\") " pod="kube-system/kube-proxy-fx5xb" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444865 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-bpf-maps\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445041 kubelet[2845]: I1213 14:34:31.444885 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-cgroup\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.444910 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-net\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.444935 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-hostproc\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.444958 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcvc\" (UniqueName: \"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-kube-api-access-ggcvc\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.444983 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-run\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.445006 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cni-path\") pod \"cilium-kv2tc\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " pod="kube-system/cilium-kv2tc" Dec 13 14:34:31.445292 kubelet[2845]: I1213 14:34:31.445032 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpr2\" (UniqueName: \"kubernetes.io/projected/86172b55-af36-4ffd-88ca-bd894f604735-kube-api-access-snpr2\") pod \"kube-proxy-fx5xb\" (UID: \"86172b55-af36-4ffd-88ca-bd894f604735\") " pod="kube-system/kube-proxy-fx5xb" Dec 13 14:34:31.446117 kubelet[2845]: I1213 14:34:31.445056 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86172b55-af36-4ffd-88ca-bd894f604735-xtables-lock\") pod \"kube-proxy-fx5xb\" (UID: \"86172b55-af36-4ffd-88ca-bd894f604735\") " 
pod="kube-system/kube-proxy-fx5xb" Dec 13 14:34:31.446117 kubelet[2845]: I1213 14:34:31.445083 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86172b55-af36-4ffd-88ca-bd894f604735-kube-proxy\") pod \"kube-proxy-fx5xb\" (UID: \"86172b55-af36-4ffd-88ca-bd894f604735\") " pod="kube-system/kube-proxy-fx5xb" Dec 13 14:34:31.453075 systemd[1]: Created slice kubepods-burstable-pod7a9ba9b9_945a_4683_922e_5d87687737bf.slice. Dec 13 14:34:31.521166 kubelet[2845]: I1213 14:34:31.521120 2845 topology_manager.go:215] "Topology Admit Handler" podUID="0761bd5e-587c-4c12-96ed-46c409558955" podNamespace="kube-system" podName="cilium-operator-599987898-8jsjc" Dec 13 14:34:31.528407 systemd[1]: Created slice kubepods-besteffort-pod0761bd5e_587c_4c12_96ed_46c409558955.slice. Dec 13 14:34:31.546210 kubelet[2845]: I1213 14:34:31.546165 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0761bd5e-587c-4c12-96ed-46c409558955-cilium-config-path\") pod \"cilium-operator-599987898-8jsjc\" (UID: \"0761bd5e-587c-4c12-96ed-46c409558955\") " pod="kube-system/cilium-operator-599987898-8jsjc" Dec 13 14:34:31.547041 kubelet[2845]: I1213 14:34:31.547015 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tq7\" (UniqueName: \"kubernetes.io/projected/0761bd5e-587c-4c12-96ed-46c409558955-kube-api-access-c4tq7\") pod \"cilium-operator-599987898-8jsjc\" (UID: \"0761bd5e-587c-4c12-96ed-46c409558955\") " pod="kube-system/cilium-operator-599987898-8jsjc" Dec 13 14:34:31.723352 env[1647]: time="2024-12-13T14:34:31.723226044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx5xb,Uid:86172b55-af36-4ffd-88ca-bd894f604735,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:31.760905 env[1647]: time="2024-12-13T14:34:31.760849755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv2tc,Uid:7a9ba9b9-945a-4683-922e-5d87687737bf,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:31.771075 env[1647]: time="2024-12-13T14:34:31.770956965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:31.771252 env[1647]: time="2024-12-13T14:34:31.771106561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:31.771252 env[1647]: time="2024-12-13T14:34:31.771139034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:31.771440 env[1647]: time="2024-12-13T14:34:31.771393949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9 pid=2926 runtime=io.containerd.runc.v2 Dec 13 14:34:31.813020 systemd[1]: Started cri-containerd-02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9.scope. 
Dec 13 14:34:31.833762 env[1647]: time="2024-12-13T14:34:31.833712168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8jsjc,Uid:0761bd5e-587c-4c12-96ed-46c409558955,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:31.837496 env[1647]: time="2024-12-13T14:34:31.836492179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:31.839070 env[1647]: time="2024-12-13T14:34:31.837518220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:31.839070 env[1647]: time="2024-12-13T14:34:31.837551285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:31.839360 env[1647]: time="2024-12-13T14:34:31.839252732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196 pid=2958 runtime=io.containerd.runc.v2 Dec 13 14:34:31.887511 systemd[1]: Started cri-containerd-aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196.scope. Dec 13 14:34:31.920210 env[1647]: time="2024-12-13T14:34:31.920155494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx5xb,Uid:86172b55-af36-4ffd-88ca-bd894f604735,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9\"" Dec 13 14:34:31.939246 env[1647]: time="2024-12-13T14:34:31.939092572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:31.939246 env[1647]: time="2024-12-13T14:34:31.939155450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:31.939246 env[1647]: time="2024-12-13T14:34:31.939171058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:31.945471 env[1647]: time="2024-12-13T14:34:31.939744502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6 pid=2991 runtime=io.containerd.runc.v2 Dec 13 14:34:31.986986 env[1647]: time="2024-12-13T14:34:31.985960984Z" level=info msg="CreateContainer within sandbox \"02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:34:32.012613 systemd[1]: Started cri-containerd-37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6.scope. 
Dec 13 14:34:32.054136 env[1647]: time="2024-12-13T14:34:32.054014184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv2tc,Uid:7a9ba9b9-945a-4683-922e-5d87687737bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\"" Dec 13 14:34:32.059104 env[1647]: time="2024-12-13T14:34:32.059060615Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:34:32.073000 env[1647]: time="2024-12-13T14:34:32.072942584Z" level=info msg="CreateContainer within sandbox \"02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"affeb5929a965660becfb40eec651434bd3953e2217376b7ab40163d821b2e82\"" Dec 13 14:34:32.076093 env[1647]: time="2024-12-13T14:34:32.076052476Z" level=info msg="StartContainer for \"affeb5929a965660becfb40eec651434bd3953e2217376b7ab40163d821b2e82\"" Dec 13 14:34:32.115073 systemd[1]: Started cri-containerd-affeb5929a965660becfb40eec651434bd3953e2217376b7ab40163d821b2e82.scope. Dec 13 14:34:32.139648 env[1647]: time="2024-12-13T14:34:32.139594065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8jsjc,Uid:0761bd5e-587c-4c12-96ed-46c409558955,Namespace:kube-system,Attempt:0,} returns sandbox id \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\"" Dec 13 14:34:32.187588 env[1647]: time="2024-12-13T14:34:32.187465581Z" level=info msg="StartContainer for \"affeb5929a965660becfb40eec651434bd3953e2217376b7ab40163d821b2e82\" returns successfully" Dec 13 14:34:32.481645 kubelet[2845]: I1213 14:34:32.481565 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx5xb" podStartSLOduration=1.4815443529999999 podStartE2EDuration="1.481544353s" podCreationTimestamp="2024-12-13 14:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:34:32.480968242 +0000 UTC m=+13.391265690" watchObservedRunningTime="2024-12-13 14:34:32.481544353 +0000 UTC m=+13.391841791" Dec 13 14:34:44.239271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200279240.mount: Deactivated successfully. 
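By this point containerd has handed back sandbox IDs for all three pods (kube-proxy-fx5xb, cilium-kv2tc, cilium-operator-599987898-8jsjc) and kube-proxy's container is already reported running. If crictl is available on the node, the same state can be read back over the CRI socket; a sketch, assuming containerd's default endpoint:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    # sandbox ID copied from the kube-proxy RunPodSandbox line above:
    crictl inspectp 02d3711d58a99dea2504f9b6f5de43b7e955097a4e13ae6f7d78aa4ddd5fa6f9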
Dec 13 14:34:48.680949 env[1647]: time="2024-12-13T14:34:48.680883986Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:48.691747 env[1647]: time="2024-12-13T14:34:48.691696163Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:48.695090 env[1647]: time="2024-12-13T14:34:48.694986009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:48.695756 env[1647]: time="2024-12-13T14:34:48.695713498Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:34:48.699066 env[1647]: time="2024-12-13T14:34:48.698844829Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:34:48.700080 env[1647]: time="2024-12-13T14:34:48.700045309Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:34:48.733565 env[1647]: time="2024-12-13T14:34:48.733493632Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\"" Dec 13 14:34:48.736230 env[1647]: time="2024-12-13T14:34:48.734863144Z" level=info msg="StartContainer for \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\"" Dec 13 14:34:48.764119 systemd[1]: Started cri-containerd-c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780.scope. Dec 13 14:34:48.777813 systemd[1]: run-containerd-runc-k8s.io-c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780-runc.IhNahh.mount: Deactivated successfully. Dec 13 14:34:48.828622 env[1647]: time="2024-12-13T14:34:48.828565581Z" level=info msg="StartContainer for \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\" returns successfully" Dec 13 14:34:48.841638 systemd[1]: cri-containerd-c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780.scope: Deactivated successfully. 
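The PullImage round trip above resolves the digest-pinned reference (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...) to the local image ID sha256:3e35b3..., roughly sixteen seconds after the pull was requested at 14:34:32. Since ctr ships alongside containerd, the stored reference can be cross-checked straight from the k8s.io namespace:

    ctr --namespace k8s.io images ls | grep 'quay.io/cilium/cilium'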
Dec 13 14:34:49.020979 env[1647]: time="2024-12-13T14:34:49.020842430Z" level=info msg="shim disconnected" id=c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780 Dec 13 14:34:49.020979 env[1647]: time="2024-12-13T14:34:49.020894199Z" level=warning msg="cleaning up after shim disconnected" id=c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780 namespace=k8s.io Dec 13 14:34:49.020979 env[1647]: time="2024-12-13T14:34:49.020907763Z" level=info msg="cleaning up dead shim" Dec 13 14:34:49.042641 env[1647]: time="2024-12-13T14:34:49.042505527Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3249 runtime=io.containerd.runc.v2\n" Dec 13 14:34:49.550188 env[1647]: time="2024-12-13T14:34:49.550136376Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:34:49.585502 env[1647]: time="2024-12-13T14:34:49.585445996Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\"" Dec 13 14:34:49.586439 env[1647]: time="2024-12-13T14:34:49.586399893Z" level=info msg="StartContainer for \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\"" Dec 13 14:34:49.647757 systemd[1]: Started cri-containerd-369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa.scope. Dec 13 14:34:49.711851 env[1647]: time="2024-12-13T14:34:49.711481550Z" level=info msg="StartContainer for \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\" returns successfully" Dec 13 14:34:49.727388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780-rootfs.mount: Deactivated successfully. Dec 13 14:34:49.752185 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:34:49.753543 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:34:49.754772 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:34:49.774972 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:34:49.804666 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:34:49.813221 systemd[1]: cri-containerd-369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa.scope: Deactivated successfully. Dec 13 14:34:49.871344 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:34:49.898362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa-rootfs.mount: Deactivated successfully. 
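The systemd-sysctl stop/start interleaved with the apply-sysctl-overwrites container is consistent with that init step doing what its name says: rewriting kernel parameters (rp_filter handling being the usual suspect) that the distribution's sysctl profile would otherwise reapply over Cilium's values. That reading is an inference from the ordering here, not something the log states; a spot check once the agent is up might look like:

    sysctl net.ipv4.conf.all.rp_filter
    # Cilium generally expects rp_filter relaxed on its datapath interfaces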
Dec 13 14:34:49.921016 env[1647]: time="2024-12-13T14:34:49.920886978Z" level=info msg="shim disconnected" id=369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa Dec 13 14:34:49.921427 env[1647]: time="2024-12-13T14:34:49.921366969Z" level=warning msg="cleaning up after shim disconnected" id=369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa namespace=k8s.io Dec 13 14:34:49.921427 env[1647]: time="2024-12-13T14:34:49.921419391Z" level=info msg="cleaning up dead shim" Dec 13 14:34:49.945452 env[1647]: time="2024-12-13T14:34:49.945332189Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2\n" Dec 13 14:34:50.579497 env[1647]: time="2024-12-13T14:34:50.579440064Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:34:50.684278 env[1647]: time="2024-12-13T14:34:50.684214142Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\"" Dec 13 14:34:50.687029 env[1647]: time="2024-12-13T14:34:50.686987096Z" level=info msg="StartContainer for \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\"" Dec 13 14:34:50.720418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771500125.mount: Deactivated successfully. Dec 13 14:34:50.743439 systemd[1]: Started cri-containerd-18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611.scope. Dec 13 14:34:50.762540 systemd[1]: run-containerd-runc-k8s.io-18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611-runc.mAW3VA.mount: Deactivated successfully. Dec 13 14:34:50.835683 env[1647]: time="2024-12-13T14:34:50.835573449Z" level=info msg="StartContainer for \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\" returns successfully" Dec 13 14:34:50.838976 systemd[1]: cri-containerd-18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611.scope: Deactivated successfully. Dec 13 14:34:50.893733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611-rootfs.mount: Deactivated successfully. 
Dec 13 14:34:51.046016 env[1647]: time="2024-12-13T14:34:51.045961562Z" level=info msg="shim disconnected" id=18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611 Dec 13 14:34:51.046016 env[1647]: time="2024-12-13T14:34:51.046017059Z" level=warning msg="cleaning up after shim disconnected" id=18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611 namespace=k8s.io Dec 13 14:34:51.046336 env[1647]: time="2024-12-13T14:34:51.046029831Z" level=info msg="cleaning up dead shim" Dec 13 14:34:51.081716 env[1647]: time="2024-12-13T14:34:51.081671042Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3375 runtime=io.containerd.runc.v2\n" Dec 13 14:34:51.266745 env[1647]: time="2024-12-13T14:34:51.266693350Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:51.270165 env[1647]: time="2024-12-13T14:34:51.270118591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:51.273469 env[1647]: time="2024-12-13T14:34:51.273366577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:51.274063 env[1647]: time="2024-12-13T14:34:51.274026044Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:34:51.279254 env[1647]: time="2024-12-13T14:34:51.279215236Z" level=info msg="CreateContainer within sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:34:51.312039 env[1647]: time="2024-12-13T14:34:51.311981736Z" level=info msg="CreateContainer within sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\"" Dec 13 14:34:51.312806 env[1647]: time="2024-12-13T14:34:51.312661521Z" level=info msg="StartContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\"" Dec 13 14:34:51.338300 systemd[1]: Started cri-containerd-f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8.scope. 
Dec 13 14:34:51.393242 env[1647]: time="2024-12-13T14:34:51.393186611Z" level=info msg="StartContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" returns successfully" Dec 13 14:34:51.593306 env[1647]: time="2024-12-13T14:34:51.589162774Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:34:51.655364 env[1647]: time="2024-12-13T14:34:51.655306770Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\"" Dec 13 14:34:51.657057 env[1647]: time="2024-12-13T14:34:51.657018795Z" level=info msg="StartContainer for \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\"" Dec 13 14:34:51.697290 systemd[1]: Started cri-containerd-cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f.scope. Dec 13 14:34:51.726989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839967114.mount: Deactivated successfully. Dec 13 14:34:51.845857 systemd[1]: cri-containerd-cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f.scope: Deactivated successfully. Dec 13 14:34:51.848396 env[1647]: time="2024-12-13T14:34:51.848340844Z" level=info msg="StartContainer for \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\" returns successfully" Dec 13 14:34:51.897956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f-rootfs.mount: Deactivated successfully. Dec 13 14:34:51.968138 env[1647]: time="2024-12-13T14:34:51.968078777Z" level=info msg="shim disconnected" id=cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f Dec 13 14:34:51.968490 env[1647]: time="2024-12-13T14:34:51.968454526Z" level=warning msg="cleaning up after shim disconnected" id=cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f namespace=k8s.io Dec 13 14:34:51.968600 env[1647]: time="2024-12-13T14:34:51.968582823Z" level=info msg="cleaning up dead shim" Dec 13 14:34:51.987189 env[1647]: time="2024-12-13T14:34:51.987140998Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\n" Dec 13 14:34:52.625401 env[1647]: time="2024-12-13T14:34:52.611300779Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:34:52.661830 env[1647]: time="2024-12-13T14:34:52.661775941Z" level=info msg="CreateContainer within sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\"" Dec 13 14:34:52.662908 env[1647]: time="2024-12-13T14:34:52.662873540Z" level=info msg="StartContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\"" Dec 13 14:34:52.711660 systemd[1]: Started cri-containerd-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6.scope. 
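Everything from mount-cgroup through clean-cilium-state follows the same four-beat pattern (CreateContainer, StartContainer, scope Deactivated, shim disconnected): these are Cilium's init containers running to completion in order, and the "cleaning up dead shim" warnings are ordinary teardown, not failures. Once cilium-agent itself is up, the terminal states are queryable from the API; a sketch:

    kubectl -n kube-system get pod cilium-kv2tc \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state.terminated.reason}{"\n"}{end}'
    # each line should end in Completed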
Dec 13 14:34:52.751811 kubelet[2845]: I1213 14:34:52.751731 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8jsjc" podStartSLOduration=2.616694393 podStartE2EDuration="21.751708736s" podCreationTimestamp="2024-12-13 14:34:31 +0000 UTC" firstStartedPulling="2024-12-13 14:34:32.140726199 +0000 UTC m=+13.051023632" lastFinishedPulling="2024-12-13 14:34:51.275740522 +0000 UTC m=+32.186037975" observedRunningTime="2024-12-13 14:34:52.058022315 +0000 UTC m=+32.968319761" watchObservedRunningTime="2024-12-13 14:34:52.751708736 +0000 UTC m=+33.662006188" Dec 13 14:34:52.790240 env[1647]: time="2024-12-13T14:34:52.790181671Z" level=info msg="StartContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" returns successfully" Dec 13 14:34:53.231225 kubelet[2845]: I1213 14:34:53.231188 2845 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:34:53.274352 kubelet[2845]: I1213 14:34:53.274306 2845 topology_manager.go:215] "Topology Admit Handler" podUID="58c0bd14-ab1d-4658-8a37-994d69630c96" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rttqf" Dec 13 14:34:53.281475 kubelet[2845]: I1213 14:34:53.281435 2845 topology_manager.go:215] "Topology Admit Handler" podUID="0ae597c1-e5c9-4cc0-8053-41d6917dd056" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qs2x6" Dec 13 14:34:53.281544 systemd[1]: Created slice kubepods-burstable-pod58c0bd14_ab1d_4658_8a37_994d69630c96.slice. Dec 13 14:34:53.295334 systemd[1]: Created slice kubepods-burstable-pod0ae597c1_e5c9_4cc0_8053_41d6917dd056.slice. Dec 13 14:34:53.338637 kubelet[2845]: I1213 14:34:53.338580 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58c0bd14-ab1d-4658-8a37-994d69630c96-config-volume\") pod \"coredns-7db6d8ff4d-rttqf\" (UID: \"58c0bd14-ab1d-4658-8a37-994d69630c96\") " pod="kube-system/coredns-7db6d8ff4d-rttqf" Dec 13 14:34:53.338861 kubelet[2845]: I1213 14:34:53.338842 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhpmt\" (UniqueName: \"kubernetes.io/projected/0ae597c1-e5c9-4cc0-8053-41d6917dd056-kube-api-access-qhpmt\") pod \"coredns-7db6d8ff4d-qs2x6\" (UID: \"0ae597c1-e5c9-4cc0-8053-41d6917dd056\") " pod="kube-system/coredns-7db6d8ff4d-qs2x6" Dec 13 14:34:53.339002 kubelet[2845]: I1213 14:34:53.338986 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pdxr\" (UniqueName: \"kubernetes.io/projected/58c0bd14-ab1d-4658-8a37-994d69630c96-kube-api-access-8pdxr\") pod \"coredns-7db6d8ff4d-rttqf\" (UID: \"58c0bd14-ab1d-4658-8a37-994d69630c96\") " pod="kube-system/coredns-7db6d8ff4d-rttqf" Dec 13 14:34:53.339132 kubelet[2845]: I1213 14:34:53.339116 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ae597c1-e5c9-4cc0-8053-41d6917dd056-config-volume\") pod \"coredns-7db6d8ff4d-qs2x6\" (UID: \"0ae597c1-e5c9-4cc0-8053-41d6917dd056\") " pod="kube-system/coredns-7db6d8ff4d-qs2x6" Dec 13 14:34:53.592503 env[1647]: time="2024-12-13T14:34:53.592065975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rttqf,Uid:58c0bd14-ab1d-4658-8a37-994d69630c96,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:53.604409 env[1647]: 
time="2024-12-13T14:34:53.604345575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qs2x6,Uid:0ae597c1-e5c9-4cc0-8053-41d6917dd056,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:53.664767 kubelet[2845]: I1213 14:34:53.664702 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kv2tc" podStartSLOduration=6.023402156 podStartE2EDuration="22.664677155s" podCreationTimestamp="2024-12-13 14:34:31 +0000 UTC" firstStartedPulling="2024-12-13 14:34:32.055892384 +0000 UTC m=+12.966189814" lastFinishedPulling="2024-12-13 14:34:48.697167378 +0000 UTC m=+29.607464813" observedRunningTime="2024-12-13 14:34:53.663308937 +0000 UTC m=+34.573606386" watchObservedRunningTime="2024-12-13 14:34:53.664677155 +0000 UTC m=+34.574974603" Dec 13 14:34:55.968943 (udev-worker)[3596]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:34:55.969961 systemd-networkd[1370]: cilium_host: Link UP Dec 13 14:34:55.973300 systemd-networkd[1370]: cilium_net: Link UP Dec 13 14:34:55.973549 systemd-networkd[1370]: cilium_net: Gained carrier Dec 13 14:34:55.974836 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:34:55.974920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:34:55.975249 systemd-networkd[1370]: cilium_host: Gained carrier Dec 13 14:34:55.975588 (udev-worker)[3633]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:34:56.347960 (udev-worker)[3640]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:34:56.360256 systemd-networkd[1370]: cilium_vxlan: Link UP Dec 13 14:34:56.360265 systemd-networkd[1370]: cilium_vxlan: Gained carrier Dec 13 14:34:56.560895 systemd-networkd[1370]: cilium_net: Gained IPv6LL Dec 13 14:34:56.883894 systemd-networkd[1370]: cilium_host: Gained IPv6LL Dec 13 14:34:57.009401 kernel: NET: Registered PF_ALG protocol family Dec 13 14:34:57.392556 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Dec 13 14:34:58.316065 (udev-worker)[3638]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:34:58.318434 systemd-networkd[1370]: lxc_health: Link UP Dec 13 14:34:58.330873 systemd-networkd[1370]: lxc_health: Gained carrier Dec 13 14:34:58.331733 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:34:58.813201 systemd-networkd[1370]: lxc031cec6995cf: Link UP Dec 13 14:34:58.820399 kernel: eth0: renamed from tmpde9fe Dec 13 14:34:58.826763 systemd-networkd[1370]: lxc031cec6995cf: Gained carrier Dec 13 14:34:58.827413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc031cec6995cf: link becomes ready Dec 13 14:34:58.836796 systemd-networkd[1370]: lxc2d4be927809f: Link UP Dec 13 14:34:58.841406 kernel: eth0: renamed from tmpb66bc Dec 13 14:34:58.850078 systemd-networkd[1370]: lxc2d4be927809f: Gained carrier Dec 13 14:34:58.850473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d4be927809f: link becomes ready Dec 13 14:34:59.824772 systemd-networkd[1370]: lxc_health: Gained IPv6LL Dec 13 14:35:00.528658 systemd-networkd[1370]: lxc2d4be927809f: Gained IPv6LL Dec 13 14:35:00.656620 systemd-networkd[1370]: lxc031cec6995cf: Gained IPv6LL Dec 13 14:35:04.676635 systemd[1]: Started sshd@5-172.31.18.151:22-139.178.89.65:48554.service. 
Dec 13 14:35:04.885189 sshd[3999]: Accepted publickey for core from 139.178.89.65 port 48554 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:04.888622 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:04.897395 systemd[1]: Started session-6.scope. Dec 13 14:35:04.899487 systemd-logind[1636]: New session 6 of user core. Dec 13 14:35:05.380243 sshd[3999]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:05.386272 systemd-logind[1636]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:35:05.389307 systemd[1]: sshd@5-172.31.18.151:22-139.178.89.65:48554.service: Deactivated successfully. Dec 13 14:35:05.390657 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:35:05.393752 systemd-logind[1636]: Removed session 6. Dec 13 14:35:06.304519 amazon-ssm-agent[1624]: 2024-12-13 14:35:06 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:35:08.332285 env[1647]: time="2024-12-13T14:35:08.331876229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:08.332285 env[1647]: time="2024-12-13T14:35:08.332033324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:08.332285 env[1647]: time="2024-12-13T14:35:08.332056404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:08.333094 env[1647]: time="2024-12-13T14:35:08.333005137Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf pid=4026 runtime=io.containerd.runc.v2 Dec 13 14:35:08.363472 systemd[1]: Started cri-containerd-b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf.scope. Dec 13 14:35:08.382247 systemd[1]: run-containerd-runc-k8s.io-b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf-runc.3Y4AV3.mount: Deactivated successfully. Dec 13 14:35:08.412327 env[1647]: time="2024-12-13T14:35:08.409690559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:08.412327 env[1647]: time="2024-12-13T14:35:08.409750590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:08.412327 env[1647]: time="2024-12-13T14:35:08.409768060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:08.412327 env[1647]: time="2024-12-13T14:35:08.409942034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de9fef143fb0eb828426037a2fd6437522e6d5da832b8bc300e09a9c17574ed7 pid=4055 runtime=io.containerd.runc.v2 Dec 13 14:35:08.448588 systemd[1]: Started cri-containerd-de9fef143fb0eb828426037a2fd6437522e6d5da832b8bc300e09a9c17574ed7.scope. 
Dec 13 14:35:08.536698 env[1647]: time="2024-12-13T14:35:08.536650621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qs2x6,Uid:0ae597c1-e5c9-4cc0-8053-41d6917dd056,Namespace:kube-system,Attempt:0,} returns sandbox id \"b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf\"" Dec 13 14:35:08.544983 env[1647]: time="2024-12-13T14:35:08.544933519Z" level=info msg="CreateContainer within sandbox \"b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:35:08.619981 env[1647]: time="2024-12-13T14:35:08.598860953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rttqf,Uid:58c0bd14-ab1d-4658-8a37-994d69630c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"de9fef143fb0eb828426037a2fd6437522e6d5da832b8bc300e09a9c17574ed7\"" Dec 13 14:35:08.620882 env[1647]: time="2024-12-13T14:35:08.620811799Z" level=info msg="CreateContainer within sandbox \"b66bc09dfe1a89f18864f45182157b66a9a25fee0aef98a4bab73cea7eb779bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbe8272fd0d90c14a41e6ebfa49f870b402345b883f153b942bc09d8a7c67f95\"" Dec 13 14:35:08.621758 env[1647]: time="2024-12-13T14:35:08.621728532Z" level=info msg="StartContainer for \"dbe8272fd0d90c14a41e6ebfa49f870b402345b883f153b942bc09d8a7c67f95\"" Dec 13 14:35:08.622580 env[1647]: time="2024-12-13T14:35:08.622547228Z" level=info msg="CreateContainer within sandbox \"de9fef143fb0eb828426037a2fd6437522e6d5da832b8bc300e09a9c17574ed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:35:08.664592 systemd[1]: Started cri-containerd-dbe8272fd0d90c14a41e6ebfa49f870b402345b883f153b942bc09d8a7c67f95.scope. Dec 13 14:35:08.667089 env[1647]: time="2024-12-13T14:35:08.666427647Z" level=info msg="CreateContainer within sandbox \"de9fef143fb0eb828426037a2fd6437522e6d5da832b8bc300e09a9c17574ed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adc38920ee5bd09bdfee8f1577884370f3c72fa3f56f9d8930d4eb96d13a859d\"" Dec 13 14:35:08.669831 env[1647]: time="2024-12-13T14:35:08.669793290Z" level=info msg="StartContainer for \"adc38920ee5bd09bdfee8f1577884370f3c72fa3f56f9d8930d4eb96d13a859d\"" Dec 13 14:35:08.722189 systemd[1]: Started cri-containerd-adc38920ee5bd09bdfee8f1577884370f3c72fa3f56f9d8930d4eb96d13a859d.scope. 
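With both coredns containers created and their runc scopes started, cluster DNS is about to come up; a quick smoke test from a disposable pod (image and tag are arbitrary picks, not taken from this log):

    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup kubernetes.default.svc.cluster.local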
Dec 13 14:35:08.756676 env[1647]: time="2024-12-13T14:35:08.756626415Z" level=info msg="StartContainer for \"dbe8272fd0d90c14a41e6ebfa49f870b402345b883f153b942bc09d8a7c67f95\" returns successfully" Dec 13 14:35:08.780756 env[1647]: time="2024-12-13T14:35:08.779816764Z" level=info msg="StartContainer for \"adc38920ee5bd09bdfee8f1577884370f3c72fa3f56f9d8930d4eb96d13a859d\" returns successfully" Dec 13 14:35:09.762441 kubelet[2845]: I1213 14:35:09.759805 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qs2x6" podStartSLOduration=38.759779466 podStartE2EDuration="38.759779466s" podCreationTimestamp="2024-12-13 14:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:35:09.716705095 +0000 UTC m=+50.627002544" watchObservedRunningTime="2024-12-13 14:35:09.759779466 +0000 UTC m=+50.670076914" Dec 13 14:35:09.799307 kubelet[2845]: I1213 14:35:09.799240 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rttqf" podStartSLOduration=38.799204256 podStartE2EDuration="38.799204256s" podCreationTimestamp="2024-12-13 14:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:35:09.760888876 +0000 UTC m=+50.671186326" watchObservedRunningTime="2024-12-13 14:35:09.799204256 +0000 UTC m=+50.709501705" Dec 13 14:35:10.405808 systemd[1]: Started sshd@6-172.31.18.151:22-139.178.89.65:51472.service. Dec 13 14:35:10.609522 sshd[4188]: Accepted publickey for core from 139.178.89.65 port 51472 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:10.613975 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:10.638468 systemd[1]: Started session-7.scope. Dec 13 14:35:10.640701 systemd-logind[1636]: New session 7 of user core. Dec 13 14:35:11.018656 sshd[4188]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:11.023413 systemd[1]: sshd@6-172.31.18.151:22-139.178.89.65:51472.service: Deactivated successfully. Dec 13 14:35:11.025052 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:35:11.026268 systemd-logind[1636]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:35:11.027931 systemd-logind[1636]: Removed session 7. Dec 13 14:35:16.058128 systemd[1]: Started sshd@7-172.31.18.151:22-139.178.89.65:51482.service. Dec 13 14:35:16.235241 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 51482 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:16.237068 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:16.244570 systemd[1]: Started session-8.scope. Dec 13 14:35:16.246575 systemd-logind[1636]: New session 8 of user core. Dec 13 14:35:16.575503 sshd[4203]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:16.582638 systemd[1]: sshd@7-172.31.18.151:22-139.178.89.65:51482.service: Deactivated successfully. Dec 13 14:35:16.583826 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:35:16.584770 systemd-logind[1636]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:35:16.586481 systemd-logind[1636]: Removed session 8. Dec 13 14:35:21.602323 systemd[1]: Started sshd@8-172.31.18.151:22-139.178.89.65:41018.service. 
Dec 13 14:35:21.765226 sshd[4218]: Accepted publickey for core from 139.178.89.65 port 41018 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:21.768837 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:21.786675 systemd-logind[1636]: New session 9 of user core. Dec 13 14:35:21.787341 systemd[1]: Started session-9.scope. Dec 13 14:35:22.007886 sshd[4218]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:22.011505 systemd[1]: sshd@8-172.31.18.151:22-139.178.89.65:41018.service: Deactivated successfully. Dec 13 14:35:22.012567 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:35:22.013501 systemd-logind[1636]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:35:22.014768 systemd-logind[1636]: Removed session 9. Dec 13 14:35:27.043032 systemd[1]: Started sshd@9-172.31.18.151:22-139.178.89.65:41032.service. Dec 13 14:35:27.232964 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 41032 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:27.234656 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:27.244500 systemd[1]: Started session-10.scope. Dec 13 14:35:27.245542 systemd-logind[1636]: New session 10 of user core. Dec 13 14:35:27.464585 sshd[4230]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:27.468776 systemd-logind[1636]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:35:27.469765 systemd[1]: sshd@9-172.31.18.151:22-139.178.89.65:41032.service: Deactivated successfully. Dec 13 14:35:27.470812 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:35:27.471878 systemd-logind[1636]: Removed session 10. Dec 13 14:35:32.492258 systemd[1]: Started sshd@10-172.31.18.151:22-139.178.89.65:34294.service. Dec 13 14:35:32.677239 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 34294 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:32.679820 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:32.686908 systemd[1]: Started session-11.scope. Dec 13 14:35:32.687517 systemd-logind[1636]: New session 11 of user core. Dec 13 14:35:32.917991 sshd[4245]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:32.922939 systemd-logind[1636]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:35:32.923461 systemd[1]: sshd@10-172.31.18.151:22-139.178.89.65:34294.service: Deactivated successfully. Dec 13 14:35:32.931875 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:35:32.935734 systemd-logind[1636]: Removed session 11. Dec 13 14:35:37.950139 systemd[1]: Started sshd@11-172.31.18.151:22-139.178.89.65:34296.service. Dec 13 14:35:38.125752 sshd[4259]: Accepted publickey for core from 139.178.89.65 port 34296 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:38.127798 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:38.134341 systemd[1]: Started session-12.scope. Dec 13 14:35:38.134856 systemd-logind[1636]: New session 12 of user core. Dec 13 14:35:38.421282 sshd[4259]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:38.425499 systemd[1]: sshd@11-172.31.18.151:22-139.178.89.65:34296.service: Deactivated successfully. Dec 13 14:35:38.426523 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 14:35:38.427821 systemd-logind[1636]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:35:38.428911 systemd-logind[1636]: Removed session 12. Dec 13 14:35:43.449402 systemd[1]: Started sshd@12-172.31.18.151:22-139.178.89.65:52084.service. Dec 13 14:35:43.613975 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 52084 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:43.623083 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:43.641318 systemd-logind[1636]: New session 13 of user core. Dec 13 14:35:43.642752 systemd[1]: Started session-13.scope. Dec 13 14:35:43.879552 sshd[4272]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:43.884708 systemd[1]: sshd@12-172.31.18.151:22-139.178.89.65:52084.service: Deactivated successfully. Dec 13 14:35:43.885463 systemd-logind[1636]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:35:43.885722 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:35:43.887063 systemd-logind[1636]: Removed session 13. Dec 13 14:35:43.906047 systemd[1]: Started sshd@13-172.31.18.151:22-139.178.89.65:52096.service. Dec 13 14:35:44.086158 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 52096 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:44.090345 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:44.098427 systemd-logind[1636]: New session 14 of user core. Dec 13 14:35:44.098917 systemd[1]: Started session-14.scope. Dec 13 14:35:44.516026 sshd[4284]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:44.527806 systemd[1]: sshd@13-172.31.18.151:22-139.178.89.65:52096.service: Deactivated successfully. Dec 13 14:35:44.528901 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:35:44.530569 systemd-logind[1636]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:35:44.533226 systemd-logind[1636]: Removed session 14. Dec 13 14:35:44.547278 systemd[1]: Started sshd@14-172.31.18.151:22-139.178.89.65:52108.service. Dec 13 14:35:44.730516 sshd[4294]: Accepted publickey for core from 139.178.89.65 port 52108 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:44.732270 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:44.741247 systemd[1]: Started session-15.scope. Dec 13 14:35:44.742286 systemd-logind[1636]: New session 15 of user core. Dec 13 14:35:44.961220 sshd[4294]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:44.965916 systemd[1]: sshd@14-172.31.18.151:22-139.178.89.65:52108.service: Deactivated successfully. Dec 13 14:35:44.966890 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:35:44.967595 systemd-logind[1636]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:35:44.968770 systemd-logind[1636]: Removed session 15. Dec 13 14:35:50.011423 systemd[1]: Started sshd@15-172.31.18.151:22-139.178.89.65:39620.service. Dec 13 14:35:50.177776 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 39620 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:50.180943 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:50.188846 systemd[1]: Started session-16.scope. Dec 13 14:35:50.189725 systemd-logind[1636]: New session 16 of user core. 
Dec 13 14:35:50.422868 sshd[4305]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:50.441133 systemd[1]: sshd@15-172.31.18.151:22-139.178.89.65:39620.service: Deactivated successfully. Dec 13 14:35:50.446422 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:35:50.462055 systemd-logind[1636]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:35:50.468215 systemd-logind[1636]: Removed session 16. Dec 13 14:35:55.450275 systemd[1]: Started sshd@16-172.31.18.151:22-139.178.89.65:39636.service. Dec 13 14:35:55.612445 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 39636 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:55.614357 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:55.619980 systemd[1]: Started session-17.scope. Dec 13 14:35:55.620470 systemd-logind[1636]: New session 17 of user core. Dec 13 14:35:55.828814 sshd[4317]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:55.832755 systemd[1]: sshd@16-172.31.18.151:22-139.178.89.65:39636.service: Deactivated successfully. Dec 13 14:35:55.833782 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:35:55.834555 systemd-logind[1636]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:35:55.835645 systemd-logind[1636]: Removed session 17. Dec 13 14:36:00.855254 systemd[1]: Started sshd@17-172.31.18.151:22-139.178.89.65:55398.service. Dec 13 14:36:01.016344 sshd[4329]: Accepted publickey for core from 139.178.89.65 port 55398 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:01.018107 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:01.024511 systemd[1]: Started session-18.scope. Dec 13 14:36:01.025537 systemd-logind[1636]: New session 18 of user core. Dec 13 14:36:01.223975 sshd[4329]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:01.236108 systemd[1]: sshd@17-172.31.18.151:22-139.178.89.65:55398.service: Deactivated successfully. Dec 13 14:36:01.239106 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:36:01.246796 systemd-logind[1636]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:36:01.265762 systemd[1]: Started sshd@18-172.31.18.151:22-139.178.89.65:55406.service. Dec 13 14:36:01.270588 systemd-logind[1636]: Removed session 18. Dec 13 14:36:01.462940 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 55406 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:01.466103 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:01.496457 systemd[1]: Started session-19.scope. Dec 13 14:36:01.498521 systemd-logind[1636]: New session 19 of user core. Dec 13 14:36:02.820639 sshd[4341]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:02.865588 systemd[1]: Started sshd@19-172.31.18.151:22-139.178.89.65:55420.service. Dec 13 14:36:02.871442 systemd[1]: sshd@18-172.31.18.151:22-139.178.89.65:55406.service: Deactivated successfully. Dec 13 14:36:02.873262 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:36:02.889973 systemd-logind[1636]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:36:02.901510 systemd-logind[1636]: Removed session 19. 
Dec 13 14:36:03.141268 sshd[4352]: Accepted publickey for core from 139.178.89.65 port 55420 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:03.149104 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:03.178955 systemd-logind[1636]: New session 20 of user core. Dec 13 14:36:03.179628 systemd[1]: Started session-20.scope. Dec 13 14:36:06.461406 sshd[4352]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:06.465845 systemd-logind[1636]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:36:06.467679 systemd[1]: sshd@19-172.31.18.151:22-139.178.89.65:55420.service: Deactivated successfully. Dec 13 14:36:06.468624 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:36:06.470829 systemd-logind[1636]: Removed session 20. Dec 13 14:36:06.483911 systemd[1]: Started sshd@20-172.31.18.151:22-139.178.89.65:55424.service. Dec 13 14:36:06.652255 sshd[4370]: Accepted publickey for core from 139.178.89.65 port 55424 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:06.654323 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:06.659772 systemd[1]: Started session-21.scope. Dec 13 14:36:06.660629 systemd-logind[1636]: New session 21 of user core. Dec 13 14:36:07.225187 sshd[4370]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:07.230275 systemd-logind[1636]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:36:07.231726 systemd[1]: sshd@20-172.31.18.151:22-139.178.89.65:55424.service: Deactivated successfully. Dec 13 14:36:07.233264 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:36:07.234703 systemd-logind[1636]: Removed session 21. Dec 13 14:36:07.255881 systemd[1]: Started sshd@21-172.31.18.151:22-139.178.89.65:55432.service. Dec 13 14:36:07.433575 sshd[4379]: Accepted publickey for core from 139.178.89.65 port 55432 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:07.435869 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:07.443484 systemd-logind[1636]: New session 22 of user core. Dec 13 14:36:07.443783 systemd[1]: Started session-22.scope. Dec 13 14:36:07.673120 sshd[4379]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:07.684539 systemd[1]: sshd@21-172.31.18.151:22-139.178.89.65:55432.service: Deactivated successfully. Dec 13 14:36:07.685640 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:36:07.687290 systemd-logind[1636]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:36:07.688310 systemd-logind[1636]: Removed session 22. Dec 13 14:36:12.702156 systemd[1]: Started sshd@22-172.31.18.151:22-139.178.89.65:50632.service. Dec 13 14:36:12.876651 sshd[4390]: Accepted publickey for core from 139.178.89.65 port 50632 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:12.878873 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:12.896787 systemd-logind[1636]: New session 23 of user core. Dec 13 14:36:12.896867 systemd[1]: Started session-23.scope. Dec 13 14:36:13.161932 sshd[4390]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:13.166039 systemd[1]: sshd@22-172.31.18.151:22-139.178.89.65:50632.service: Deactivated successfully. Dec 13 14:36:13.167579 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 13 14:36:13.168682 systemd-logind[1636]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:36:13.169989 systemd-logind[1636]: Removed session 23. Dec 13 14:36:18.188395 systemd[1]: Started sshd@23-172.31.18.151:22-139.178.89.65:50304.service. Dec 13 14:36:18.347935 sshd[4405]: Accepted publickey for core from 139.178.89.65 port 50304 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:18.349818 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:18.355496 systemd-logind[1636]: New session 24 of user core. Dec 13 14:36:18.356140 systemd[1]: Started session-24.scope. Dec 13 14:36:18.616024 sshd[4405]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:18.619841 systemd[1]: sshd@23-172.31.18.151:22-139.178.89.65:50304.service: Deactivated successfully. Dec 13 14:36:18.620723 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:36:18.621489 systemd-logind[1636]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:36:18.622485 systemd-logind[1636]: Removed session 24. Dec 13 14:36:23.645658 systemd[1]: Started sshd@24-172.31.18.151:22-139.178.89.65:50320.service. Dec 13 14:36:23.812723 sshd[4419]: Accepted publickey for core from 139.178.89.65 port 50320 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:23.814427 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:23.822456 systemd-logind[1636]: New session 25 of user core. Dec 13 14:36:23.822497 systemd[1]: Started session-25.scope. Dec 13 14:36:24.072572 sshd[4419]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:24.076457 systemd-logind[1636]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:36:24.076715 systemd[1]: sshd@24-172.31.18.151:22-139.178.89.65:50320.service: Deactivated successfully. Dec 13 14:36:24.077815 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:36:24.079441 systemd-logind[1636]: Removed session 25. Dec 13 14:36:29.101083 systemd[1]: Started sshd@25-172.31.18.151:22-139.178.89.65:46562.service. Dec 13 14:36:29.282025 sshd[4431]: Accepted publickey for core from 139.178.89.65 port 46562 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:29.283858 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:29.291546 systemd-logind[1636]: New session 26 of user core. Dec 13 14:36:29.292527 systemd[1]: Started session-26.scope. Dec 13 14:36:29.498012 sshd[4431]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:29.501805 systemd[1]: sshd@25-172.31.18.151:22-139.178.89.65:46562.service: Deactivated successfully. Dec 13 14:36:29.502676 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:36:29.503401 systemd-logind[1636]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:36:29.504269 systemd-logind[1636]: Removed session 26. Dec 13 14:36:29.525621 systemd[1]: Started sshd@26-172.31.18.151:22-139.178.89.65:46566.service. Dec 13 14:36:29.688327 sshd[4443]: Accepted publickey for core from 139.178.89.65 port 46566 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:29.691415 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:29.699998 systemd[1]: Started session-27.scope. Dec 13 14:36:29.702906 systemd-logind[1636]: New session 27 of user core. 
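The stretch above is a long run of short SSH sessions from 139.178.89.65, each following the same systemd pattern: sshd accepts the public key, pam_unix opens the session, systemd starts session-N.scope, and on logout the scope and the per-connection sshd@... service are deactivated. A sketch that computes how long one session stayed open from its pam_unix open/close timestamps (syslog-style stamps carry no year, so 2024 is assumed from context):

from datetime import datetime

# pam_unix open/close timestamps for session 26, copied from the journal above.
opened = 'Dec 13 14:36:29.283858'
closed = 'Dec 13 14:36:29.498012'

fmt = '%Y %b %d %H:%M:%S.%f'
start = datetime.strptime('2024 ' + opened, fmt)
end = datetime.strptime('2024 ' + closed, fmt)
print((end - start).total_seconds())  # 0.214154 -> session 26 lasted about 0.2 s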
Dec 13 14:36:32.057995 systemd[1]: run-containerd-runc-k8s.io-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6-runc.z1ZqXJ.mount: Deactivated successfully. Dec 13 14:36:32.080618 env[1647]: time="2024-12-13T14:36:32.080572133Z" level=info msg="StopContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" with timeout 30 (s)" Dec 13 14:36:32.082696 env[1647]: time="2024-12-13T14:36:32.082615413Z" level=info msg="Stop container \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" with signal terminated" Dec 13 14:36:32.105177 systemd[1]: cri-containerd-f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8.scope: Deactivated successfully. Dec 13 14:36:32.115626 env[1647]: time="2024-12-13T14:36:32.115559379Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:36:32.127560 env[1647]: time="2024-12-13T14:36:32.127496076Z" level=info msg="StopContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" with timeout 2 (s)" Dec 13 14:36:32.128480 env[1647]: time="2024-12-13T14:36:32.128358770Z" level=info msg="Stop container \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" with signal terminated" Dec 13 14:36:32.140503 systemd-networkd[1370]: lxc_health: Link DOWN Dec 13 14:36:32.140515 systemd-networkd[1370]: lxc_health: Lost carrier Dec 13 14:36:32.142341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8-rootfs.mount: Deactivated successfully. Dec 13 14:36:32.236642 env[1647]: time="2024-12-13T14:36:32.236401830Z" level=info msg="shim disconnected" id=f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8 Dec 13 14:36:32.236642 env[1647]: time="2024-12-13T14:36:32.236447489Z" level=warning msg="cleaning up after shim disconnected" id=f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8 namespace=k8s.io Dec 13 14:36:32.236642 env[1647]: time="2024-12-13T14:36:32.236456495Z" level=info msg="cleaning up dead shim" Dec 13 14:36:32.260668 systemd[1]: cri-containerd-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6.scope: Deactivated successfully. Dec 13 14:36:32.261035 systemd[1]: cri-containerd-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6.scope: Consumed 9.200s CPU time. 
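When a cri-containerd scope is stopped, systemd logs the cgroup's cumulative CPU accounting, which is where the "Consumed 9.200s CPU time" figure for the long-running Cilium container above comes from. A sketch, assuming the exact "Consumed ...s CPU time" wording shown, that pulls the container ID and CPU seconds out of such a line:

import re

# systemd scope-deactivation line copied (abbreviated) from the journal above.
line = ('systemd[1]: cri-containerd-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6.scope: '
        'Consumed 9.200s CPU time.')

# Extract the 64-hex container ID embedded in the scope name plus the CPU seconds.
m = re.search(r'cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: Consumed (?P<cpu>[\d.]+)s CPU time', line)
if m:
    print(m.group('cid')[:12], float(m.group('cpu')))  # 186ae55de41c 9.2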
Dec 13 14:36:32.263357 env[1647]: time="2024-12-13T14:36:32.263325393Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4499 runtime=io.containerd.runc.v2\n" Dec 13 14:36:32.271912 env[1647]: time="2024-12-13T14:36:32.271870362Z" level=info msg="StopContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" returns successfully" Dec 13 14:36:32.272917 env[1647]: time="2024-12-13T14:36:32.272873069Z" level=info msg="StopPodSandbox for \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\"" Dec 13 14:36:32.276092 env[1647]: time="2024-12-13T14:36:32.272952489Z" level=info msg="Container to stop \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.276013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6-shm.mount: Deactivated successfully. Dec 13 14:36:32.288072 systemd[1]: cri-containerd-37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6.scope: Deactivated successfully. Dec 13 14:36:32.315437 env[1647]: time="2024-12-13T14:36:32.315281675Z" level=info msg="shim disconnected" id=186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6 Dec 13 14:36:32.315437 env[1647]: time="2024-12-13T14:36:32.315338219Z" level=warning msg="cleaning up after shim disconnected" id=186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6 namespace=k8s.io Dec 13 14:36:32.315437 env[1647]: time="2024-12-13T14:36:32.315350263Z" level=info msg="cleaning up dead shim" Dec 13 14:36:32.330264 env[1647]: time="2024-12-13T14:36:32.330214581Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4546 runtime=io.containerd.runc.v2\n" Dec 13 14:36:32.332083 env[1647]: time="2024-12-13T14:36:32.332038089Z" level=info msg="shim disconnected" id=37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6 Dec 13 14:36:32.332464 env[1647]: time="2024-12-13T14:36:32.332087048Z" level=warning msg="cleaning up after shim disconnected" id=37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6 namespace=k8s.io Dec 13 14:36:32.332464 env[1647]: time="2024-12-13T14:36:32.332099240Z" level=info msg="cleaning up dead shim" Dec 13 14:36:32.333341 env[1647]: time="2024-12-13T14:36:32.333306636Z" level=info msg="StopContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" returns successfully" Dec 13 14:36:32.334000 env[1647]: time="2024-12-13T14:36:32.333970777Z" level=info msg="StopPodSandbox for \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\"" Dec 13 14:36:32.334085 env[1647]: time="2024-12-13T14:36:32.334040534Z" level=info msg="Container to stop \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.334085 env[1647]: time="2024-12-13T14:36:32.334062625Z" level=info msg="Container to stop \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.334175 env[1647]: time="2024-12-13T14:36:32.334078541Z" level=info msg="Container to stop \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 
14:36:32.334175 env[1647]: time="2024-12-13T14:36:32.334095208Z" level=info msg="Container to stop \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.334175 env[1647]: time="2024-12-13T14:36:32.334113949Z" level=info msg="Container to stop \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.345977 env[1647]: time="2024-12-13T14:36:32.345930450Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4559 runtime=io.containerd.runc.v2\n" Dec 13 14:36:32.346182 systemd[1]: cri-containerd-aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196.scope: Deactivated successfully. Dec 13 14:36:32.346828 env[1647]: time="2024-12-13T14:36:32.346790526Z" level=info msg="TearDown network for sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" successfully" Dec 13 14:36:32.347096 env[1647]: time="2024-12-13T14:36:32.347063661Z" level=info msg="StopPodSandbox for \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" returns successfully" Dec 13 14:36:32.375223 kubelet[2845]: I1213 14:36:32.374997 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4tq7\" (UniqueName: \"kubernetes.io/projected/0761bd5e-587c-4c12-96ed-46c409558955-kube-api-access-c4tq7\") pod \"0761bd5e-587c-4c12-96ed-46c409558955\" (UID: \"0761bd5e-587c-4c12-96ed-46c409558955\") " Dec 13 14:36:32.375223 kubelet[2845]: I1213 14:36:32.375080 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0761bd5e-587c-4c12-96ed-46c409558955-cilium-config-path\") pod \"0761bd5e-587c-4c12-96ed-46c409558955\" (UID: \"0761bd5e-587c-4c12-96ed-46c409558955\") " Dec 13 14:36:32.400605 kubelet[2845]: I1213 14:36:32.393456 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0761bd5e-587c-4c12-96ed-46c409558955-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0761bd5e-587c-4c12-96ed-46c409558955" (UID: "0761bd5e-587c-4c12-96ed-46c409558955"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:32.408412 env[1647]: time="2024-12-13T14:36:32.408271792Z" level=info msg="shim disconnected" id=aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196 Dec 13 14:36:32.408412 env[1647]: time="2024-12-13T14:36:32.408328353Z" level=warning msg="cleaning up after shim disconnected" id=aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196 namespace=k8s.io Dec 13 14:36:32.408412 env[1647]: time="2024-12-13T14:36:32.408342884Z" level=info msg="cleaning up dead shim" Dec 13 14:36:32.410666 kubelet[2845]: I1213 14:36:32.410615 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0761bd5e-587c-4c12-96ed-46c409558955-kube-api-access-c4tq7" (OuterVolumeSpecName: "kube-api-access-c4tq7") pod "0761bd5e-587c-4c12-96ed-46c409558955" (UID: "0761bd5e-587c-4c12-96ed-46c409558955"). InnerVolumeSpecName "kube-api-access-c4tq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:32.422294 env[1647]: time="2024-12-13T14:36:32.422247715Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4592 runtime=io.containerd.runc.v2\n" Dec 13 14:36:32.422660 env[1647]: time="2024-12-13T14:36:32.422625842Z" level=info msg="TearDown network for sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" successfully" Dec 13 14:36:32.422763 env[1647]: time="2024-12-13T14:36:32.422655273Z" level=info msg="StopPodSandbox for \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" returns successfully" Dec 13 14:36:32.476153 kubelet[2845]: I1213 14:36:32.476102 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-etc-cni-netd\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.476390 kubelet[2845]: I1213 14:36:32.476358 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-kernel\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.476945 kubelet[2845]: I1213 14:36:32.476416 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a9ba9b9-945a-4683-922e-5d87687737bf-clustermesh-secrets\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.476945 kubelet[2845]: I1213 14:36:32.476794 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-bpf-maps\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.476945 kubelet[2845]: I1213 14:36:32.476905 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-config-path\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.476945 kubelet[2845]: I1213 14:36:32.476933 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-hubble-tls\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 14:36:32.476953 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-net\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 14:36:32.476980 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-run\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 
14:36:32.477004 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-lib-modules\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 14:36:32.477025 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-cgroup\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 14:36:32.477067 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcvc\" (UniqueName: \"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-kube-api-access-ggcvc\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477347 kubelet[2845]: I1213 14:36:32.477276 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-xtables-lock\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477577 kubelet[2845]: I1213 14:36:32.477309 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cni-path\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477577 kubelet[2845]: I1213 14:36:32.477332 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-hostproc\") pod \"7a9ba9b9-945a-4683-922e-5d87687737bf\" (UID: \"7a9ba9b9-945a-4683-922e-5d87687737bf\") " Dec 13 14:36:32.477722 kubelet[2845]: I1213 14:36:32.477694 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.477796 kubelet[2845]: I1213 14:36:32.477766 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.477977 kubelet[2845]: I1213 14:36:32.477955 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.478429 kubelet[2845]: I1213 14:36:32.478276 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.478525 kubelet[2845]: I1213 14:36:32.478449 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.478525 kubelet[2845]: I1213 14:36:32.478472 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.478525 kubelet[2845]: I1213 14:36:32.478492 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.478525 kubelet[2845]: I1213 14:36:32.478513 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.479977 kubelet[2845]: I1213 14:36:32.479954 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0761bd5e-587c-4c12-96ed-46c409558955-cilium-config-path\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.480158 kubelet[2845]: I1213 14:36:32.480141 2845 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c4tq7\" (UniqueName: \"kubernetes.io/projected/0761bd5e-587c-4c12-96ed-46c409558955-kube-api-access-c4tq7\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.480418 kubelet[2845]: I1213 14:36:32.480394 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.480617 kubelet[2845]: I1213 14:36:32.480598 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.485711 kubelet[2845]: I1213 14:36:32.485672 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:32.486971 kubelet[2845]: I1213 14:36:32.486930 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:32.487201 kubelet[2845]: I1213 14:36:32.487168 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9ba9b9-945a-4683-922e-5d87687737bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:32.488848 kubelet[2845]: I1213 14:36:32.488816 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-kube-api-access-ggcvc" (OuterVolumeSpecName: "kube-api-access-ggcvc") pod "7a9ba9b9-945a-4683-922e-5d87687737bf" (UID: "7a9ba9b9-945a-4683-922e-5d87687737bf"). InnerVolumeSpecName "kube-api-access-ggcvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.580866 2845 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-xtables-lock\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.580911 2845 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cni-path\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.580925 2845 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-hostproc\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.581029 2845 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-kernel\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.581045 2845 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-etc-cni-netd\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.581056 2845 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a9ba9b9-945a-4683-922e-5d87687737bf-clustermesh-secrets\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.581067 2845 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-bpf-maps\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.582784 kubelet[2845]: I1213 14:36:32.581077 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-config-path\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581087 2845 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-hubble-tls\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581097 2845 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-host-proc-sys-net\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581107 2845 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-lib-modules\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581118 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-cgroup\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581129 2845 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ggcvc\" (UniqueName: 
\"kubernetes.io/projected/7a9ba9b9-945a-4683-922e-5d87687737bf-kube-api-access-ggcvc\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.583109 kubelet[2845]: I1213 14:36:32.581140 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a9ba9b9-945a-4683-922e-5d87687737bf-cilium-run\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:32.977641 systemd[1]: Removed slice kubepods-besteffort-pod0761bd5e_587c_4c12_96ed_46c409558955.slice. Dec 13 14:36:32.981537 kubelet[2845]: I1213 14:36:32.981465 2845 scope.go:117] "RemoveContainer" containerID="f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8" Dec 13 14:36:32.989411 env[1647]: time="2024-12-13T14:36:32.989011461Z" level=info msg="RemoveContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\"" Dec 13 14:36:32.997727 env[1647]: time="2024-12-13T14:36:32.997683287Z" level=info msg="RemoveContainer for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" returns successfully" Dec 13 14:36:33.001628 systemd[1]: Removed slice kubepods-burstable-pod7a9ba9b9_945a_4683_922e_5d87687737bf.slice. Dec 13 14:36:33.001751 systemd[1]: kubepods-burstable-pod7a9ba9b9_945a_4683_922e_5d87687737bf.slice: Consumed 9.341s CPU time. Dec 13 14:36:33.005221 kubelet[2845]: I1213 14:36:33.005032 2845 scope.go:117] "RemoveContainer" containerID="f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8" Dec 13 14:36:33.006062 env[1647]: time="2024-12-13T14:36:33.005969362Z" level=error msg="ContainerStatus for \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\": not found" Dec 13 14:36:33.019057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6-rootfs.mount: Deactivated successfully. Dec 13 14:36:33.019257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6-rootfs.mount: Deactivated successfully. Dec 13 14:36:33.019358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196-rootfs.mount: Deactivated successfully. Dec 13 14:36:33.019454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196-shm.mount: Deactivated successfully. Dec 13 14:36:33.019543 systemd[1]: var-lib-kubelet-pods-0761bd5e\x2d587c\x2d4c12\x2d96ed\x2d46c409558955-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4tq7.mount: Deactivated successfully. Dec 13 14:36:33.019628 systemd[1]: var-lib-kubelet-pods-7a9ba9b9\x2d945a\x2d4683\x2d922e\x2d5d87687737bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggcvc.mount: Deactivated successfully. Dec 13 14:36:33.019840 systemd[1]: var-lib-kubelet-pods-7a9ba9b9\x2d945a\x2d4683\x2d922e\x2d5d87687737bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:33.019925 systemd[1]: var-lib-kubelet-pods-7a9ba9b9\x2d945a\x2d4683\x2d922e\x2d5d87687737bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 14:36:33.032045 kubelet[2845]: E1213 14:36:33.031990 2845 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\": not found" containerID="f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8" Dec 13 14:36:33.034959 kubelet[2845]: I1213 14:36:33.034841 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8"} err="failed to get container status \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f67c62ac06b8bfa4a072b928d5e73e61173ab2566e706e08e054403b742caea8\": not found" Dec 13 14:36:33.034959 kubelet[2845]: I1213 14:36:33.034963 2845 scope.go:117] "RemoveContainer" containerID="186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6" Dec 13 14:36:33.043886 env[1647]: time="2024-12-13T14:36:33.043507062Z" level=info msg="RemoveContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\"" Dec 13 14:36:33.054757 env[1647]: time="2024-12-13T14:36:33.054548810Z" level=info msg="RemoveContainer for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" returns successfully" Dec 13 14:36:33.055241 kubelet[2845]: I1213 14:36:33.055172 2845 scope.go:117] "RemoveContainer" containerID="cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f" Dec 13 14:36:33.057862 env[1647]: time="2024-12-13T14:36:33.057425218Z" level=info msg="RemoveContainer for \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\"" Dec 13 14:36:33.062241 env[1647]: time="2024-12-13T14:36:33.062189464Z" level=info msg="RemoveContainer for \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\" returns successfully" Dec 13 14:36:33.062601 kubelet[2845]: I1213 14:36:33.062567 2845 scope.go:117] "RemoveContainer" containerID="18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611" Dec 13 14:36:33.064082 env[1647]: time="2024-12-13T14:36:33.064047862Z" level=info msg="RemoveContainer for \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\"" Dec 13 14:36:33.073350 env[1647]: time="2024-12-13T14:36:33.073301109Z" level=info msg="RemoveContainer for \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\" returns successfully" Dec 13 14:36:33.077605 kubelet[2845]: I1213 14:36:33.077572 2845 scope.go:117] "RemoveContainer" containerID="369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa" Dec 13 14:36:33.082392 env[1647]: time="2024-12-13T14:36:33.081914309Z" level=info msg="RemoveContainer for \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\"" Dec 13 14:36:33.087938 env[1647]: time="2024-12-13T14:36:33.087891247Z" level=info msg="RemoveContainer for \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\" returns successfully" Dec 13 14:36:33.088343 kubelet[2845]: I1213 14:36:33.088319 2845 scope.go:117] "RemoveContainer" containerID="c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780" Dec 13 14:36:33.090199 env[1647]: time="2024-12-13T14:36:33.090145922Z" level=info msg="RemoveContainer for \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\"" Dec 13 14:36:33.095356 env[1647]: time="2024-12-13T14:36:33.095310459Z" level=info msg="RemoveContainer for 
\"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\" returns successfully" Dec 13 14:36:33.095783 kubelet[2845]: I1213 14:36:33.095757 2845 scope.go:117] "RemoveContainer" containerID="186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6" Dec 13 14:36:33.096292 env[1647]: time="2024-12-13T14:36:33.096110336Z" level=error msg="ContainerStatus for \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\": not found" Dec 13 14:36:33.096475 kubelet[2845]: E1213 14:36:33.096407 2845 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\": not found" containerID="186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6" Dec 13 14:36:33.096625 kubelet[2845]: I1213 14:36:33.096487 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6"} err="failed to get container status \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"186ae55de41c3c51f45c31a1ac99be010be0a0694833dd77b81ac3a75c8ab2a6\": not found" Dec 13 14:36:33.096625 kubelet[2845]: I1213 14:36:33.096519 2845 scope.go:117] "RemoveContainer" containerID="cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f" Dec 13 14:36:33.096785 env[1647]: time="2024-12-13T14:36:33.096718830Z" level=error msg="ContainerStatus for \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\": not found" Dec 13 14:36:33.096962 kubelet[2845]: E1213 14:36:33.096938 2845 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\": not found" containerID="cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f" Dec 13 14:36:33.097038 kubelet[2845]: I1213 14:36:33.096970 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f"} err="failed to get container status \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe86ebaba2c63934f98fd2b8c393e03a65a96d4c29cc3e677da26fda7c06d8f\": not found" Dec 13 14:36:33.097038 kubelet[2845]: I1213 14:36:33.096991 2845 scope.go:117] "RemoveContainer" containerID="18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611" Dec 13 14:36:33.097441 env[1647]: time="2024-12-13T14:36:33.097319789Z" level=error msg="ContainerStatus for \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\": not found" Dec 13 14:36:33.097538 kubelet[2845]: E1213 14:36:33.097513 2845 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\": not found" containerID="18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611" Dec 13 14:36:33.097614 kubelet[2845]: I1213 14:36:33.097547 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611"} err="failed to get container status \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\": rpc error: code = NotFound desc = an error occurred when try to find container \"18f2c62e19a111d2c5ad4bad597715d0b513571b85d68c4520b9d2aac06c5611\": not found" Dec 13 14:36:33.097614 kubelet[2845]: I1213 14:36:33.097567 2845 scope.go:117] "RemoveContainer" containerID="369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa" Dec 13 14:36:33.097782 env[1647]: time="2024-12-13T14:36:33.097729614Z" level=error msg="ContainerStatus for \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\": not found" Dec 13 14:36:33.097927 kubelet[2845]: E1213 14:36:33.097895 2845 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\": not found" containerID="369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa" Dec 13 14:36:33.097997 kubelet[2845]: I1213 14:36:33.097928 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa"} err="failed to get container status \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"369eaf30452bb46cdfa323d63ab71c62917c7ce8ddceb0e313d50b18dc0941fa\": not found" Dec 13 14:36:33.097997 kubelet[2845]: I1213 14:36:33.097947 2845 scope.go:117] "RemoveContainer" containerID="c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780" Dec 13 14:36:33.098186 env[1647]: time="2024-12-13T14:36:33.098122434Z" level=error msg="ContainerStatus for \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\": not found" Dec 13 14:36:33.098326 kubelet[2845]: E1213 14:36:33.098277 2845 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\": not found" containerID="c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780" Dec 13 14:36:33.098326 kubelet[2845]: I1213 14:36:33.098304 2845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780"} err="failed to get container status \"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c277f6aa5d5207e3762db52664b2bc0151f41b071f0392657104a7cb48022780\": not found" Dec 13 14:36:33.327145 kubelet[2845]: I1213 14:36:33.323282 2845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0761bd5e-587c-4c12-96ed-46c409558955" path="/var/lib/kubelet/pods/0761bd5e-587c-4c12-96ed-46c409558955/volumes" Dec 13 14:36:33.327145 kubelet[2845]: I1213 14:36:33.325303 2845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" path="/var/lib/kubelet/pods/7a9ba9b9-945a-4683-922e-5d87687737bf/volumes" Dec 13 14:36:33.868173 sshd[4443]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:33.873799 systemd-logind[1636]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:36:33.874035 systemd[1]: sshd@26-172.31.18.151:22-139.178.89.65:46566.service: Deactivated successfully. Dec 13 14:36:33.875038 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:36:33.879616 systemd-logind[1636]: Removed session 27. Dec 13 14:36:33.895540 systemd[1]: Started sshd@27-172.31.18.151:22-139.178.89.65:46580.service. Dec 13 14:36:34.101406 sshd[4614]: Accepted publickey for core from 139.178.89.65 port 46580 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:34.106170 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:34.122183 systemd-logind[1636]: New session 28 of user core. Dec 13 14:36:34.124561 systemd[1]: Started session-28.scope. Dec 13 14:36:34.523968 kubelet[2845]: E1213 14:36:34.523865 2845 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:36:34.863024 sshd[4614]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:34.868164 systemd[1]: sshd@27-172.31.18.151:22-139.178.89.65:46580.service: Deactivated successfully. Dec 13 14:36:34.871371 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 14:36:34.872829 systemd-logind[1636]: Session 28 logged out. Waiting for processes to exit. Dec 13 14:36:34.874450 systemd-logind[1636]: Removed session 28. 
Dec 13 14:36:34.883736 kubelet[2845]: I1213 14:36:34.883684 2845 topology_manager.go:215] "Topology Admit Handler" podUID="c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" podNamespace="kube-system" podName="cilium-8hzq8" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883763 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="cilium-agent" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883776 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="mount-cgroup" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883786 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="apply-sysctl-overwrites" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883794 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="mount-bpf-fs" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883801 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0761bd5e-587c-4c12-96ed-46c409558955" containerName="cilium-operator" Dec 13 14:36:34.884199 kubelet[2845]: E1213 14:36:34.883810 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="clean-cilium-state" Dec 13 14:36:34.892532 kubelet[2845]: I1213 14:36:34.892491 2845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9ba9b9-945a-4683-922e-5d87687737bf" containerName="cilium-agent" Dec 13 14:36:34.892532 kubelet[2845]: I1213 14:36:34.892539 2845 memory_manager.go:354] "RemoveStaleState removing state" podUID="0761bd5e-587c-4c12-96ed-46c409558955" containerName="cilium-operator" Dec 13 14:36:34.902663 systemd[1]: Started sshd@28-172.31.18.151:22-139.178.89.65:46584.service. Dec 13 14:36:34.922302 systemd[1]: Created slice kubepods-burstable-podc0381796_ddc0_4b2a_9404_5e8c5f1cb0cc.slice. 
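Note how the pod UID c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc from the Topology Admit Handler reappears as kubepods-burstable-podc0381796_ddc0_4b2a_9404_5e8c5f1cb0cc.slice: systemd reserves "-" for encoding slice hierarchy, so kubelet escapes the UID's dashes to underscores and prefixes the pod's QoS class. A sketch of the observable rule (the real conversion lives in kubelet's cgroup manager):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the log: a QoS-class
// prefix plus the pod UID with "-" escaped to "_", because systemd uses
// "-" to nest slices and it cannot appear inside one path component.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"))
	// Output: kubepods-burstable-podc0381796_ddc0_4b2a_9404_5e8c5f1cb0cc.slice
}
```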
Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.002979 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-run\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.003031 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-etc-cni-netd\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.003062 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-clustermesh-secrets\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.003086 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-net\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.003111 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-lib-modules\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.003540 kubelet[2845]: I1213 14:36:35.003132 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-ipsec-secrets\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003154 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hostproc\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003175 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hubble-tls\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003195 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-cgroup\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003216 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-config-path\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003237 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849qk\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-kube-api-access-849qk\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.004645 kubelet[2845]: I1213 14:36:35.003261 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cni-path\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.005066 kubelet[2845]: I1213 14:36:35.003284 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-kernel\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.005066 kubelet[2845]: I1213 14:36:35.003307 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-bpf-maps\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.005066 kubelet[2845]: I1213 14:36:35.003334 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-xtables-lock\") pod \"cilium-8hzq8\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " pod="kube-system/cilium-8hzq8" Dec 13 14:36:35.091172 sshd[4625]: Accepted publickey for core from 139.178.89.65 port 46584 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:35.102318 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:35.139020 systemd[1]: Started session-29.scope. Dec 13 14:36:35.139616 systemd-logind[1636]: New session 29 of user core. Dec 13 14:36:35.231999 env[1647]: time="2024-12-13T14:36:35.231957950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8hzq8,Uid:c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:35.274065 env[1647]: time="2024-12-13T14:36:35.273981989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:35.274283 env[1647]: time="2024-12-13T14:36:35.274078404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:35.274283 env[1647]: time="2024-12-13T14:36:35.274110256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:35.274437 env[1647]: time="2024-12-13T14:36:35.274291492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e pid=4647 runtime=io.containerd.runc.v2 Dec 13 14:36:35.317359 systemd[1]: Started cri-containerd-228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e.scope. Dec 13 14:36:35.378515 env[1647]: time="2024-12-13T14:36:35.378355098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8hzq8,Uid:c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\"" Dec 13 14:36:35.386451 env[1647]: time="2024-12-13T14:36:35.386327668Z" level=info msg="CreateContainer within sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:36:35.421291 env[1647]: time="2024-12-13T14:36:35.421208277Z" level=info msg="CreateContainer within sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\"" Dec 13 14:36:35.422674 env[1647]: time="2024-12-13T14:36:35.422640993Z" level=info msg="StartContainer for \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\"" Dec 13 14:36:35.464198 systemd[1]: Started cri-containerd-ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb.scope. Dec 13 14:36:35.483518 systemd[1]: cri-containerd-ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb.scope: Deactivated successfully. Dec 13 14:36:35.517819 sshd[4625]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:35.523111 systemd-logind[1636]: Session 29 logged out. Waiting for processes to exit. Dec 13 14:36:35.525363 systemd[1]: sshd@28-172.31.18.151:22-139.178.89.65:46584.service: Deactivated successfully. Dec 13 14:36:35.526582 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 14:36:35.530382 systemd-logind[1636]: Removed session 29. Dec 13 14:36:35.547692 env[1647]: time="2024-12-13T14:36:35.547633608Z" level=info msg="shim disconnected" id=ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb Dec 13 14:36:35.547903 env[1647]: time="2024-12-13T14:36:35.547878021Z" level=warning msg="cleaning up after shim disconnected" id=ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb namespace=k8s.io Dec 13 14:36:35.547990 env[1647]: time="2024-12-13T14:36:35.547959552Z" level=info msg="cleaning up dead shim" Dec 13 14:36:35.548669 systemd[1]: Started sshd@29-172.31.18.151:22-139.178.89.65:46596.service. 
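The entries above trace the standard CRI sequence for bringing up cilium-8hzq8: RunPodSandbox returns sandbox 228191a0..., CreateContainer inside that sandbox returns the mount-cgroup container ec4866cf..., and StartContainer kicks it off; the scope deactivating almost immediately foreshadows the failure unpacked below. A sketch of the same three calls (mounts, env, and security context trimmed since the full spec is dumped later in the log; this shows the shape of the protocol, not a complete pod spec):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> the sandbox ID (228191a0... in the log).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-8hzq8",
				Uid:       "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc",
				Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox -> ec4866cf... in the log.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, the step that fails below with the keycreate error.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```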
Dec 13 14:36:35.590850 env[1647]: time="2024-12-13T14:36:35.590795025Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4706 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:36:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:36:35.591228 env[1647]: time="2024-12-13T14:36:35.591097933Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Dec 13 14:36:35.594489 env[1647]: time="2024-12-13T14:36:35.594431156Z" level=error msg="Failed to pipe stdout of container \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\"" error="reading from a closed fifo" Dec 13 14:36:35.595122 env[1647]: time="2024-12-13T14:36:35.594969592Z" level=error msg="Failed to pipe stderr of container \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\"" error="reading from a closed fifo" Dec 13 14:36:35.598765 env[1647]: time="2024-12-13T14:36:35.598682185Z" level=error msg="StartContainer for \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:36:35.599156 kubelet[2845]: E1213 14:36:35.598997 2845 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb" Dec 13 14:36:35.609341 kubelet[2845]: E1213 14:36:35.609296 2845 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:36:35.609341 kubelet[2845]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:36:35.609341 kubelet[2845]: rm /hostbin/cilium-mount Dec 13 14:36:35.609616 kubelet[2845]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-849qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8hzq8_kube-system(c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:36:35.609914 kubelet[2845]: E1213 14:36:35.609872 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8hzq8" podUID="c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" Dec 13 14:36:35.744301 sshd[4705]: Accepted publickey for core from 139.178.89.65 port 46596 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:36:35.746141 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:35.753103 systemd[1]: Started session-30.scope. Dec 13 14:36:35.753968 systemd-logind[1636]: New session 30 of user core. Dec 13 14:36:36.010558 env[1647]: time="2024-12-13T14:36:36.010435327Z" level=info msg="StopPodSandbox for \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\"" Dec 13 14:36:36.010937 env[1647]: time="2024-12-13T14:36:36.010771799Z" level=info msg="Container to stop \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:36.031412 systemd[1]: cri-containerd-228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e.scope: Deactivated successfully. 
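The spec dump above shows the root cause of the RunContainerError: the init container carries SELinuxOptions with Type spc_t, and during container init runc writes the process label to /proc/self/attr/keycreate so that kernel keyrings created by the container get the right SELinux label. On this node that write returns EINVAL (typically the kernel's SELinux state won't accept the context), so mount-cgroup dies before its entrypoint ever runs; that is also why the shim warned earlier that init.pid was never written and the stdout/stderr fifos closed unread. A minimal reproduction sketch (assumes root on the affected node; the full context string is illustrative, since the spec only pins Type spc_t and Level s0):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// runc performs an equivalent write during container init; on this
	// node it fails with EINVAL, which surfaces as the RunContainerError
	// logged above for every start attempt of mount-cgroup.
	label := []byte("system_u:system_r:spc_t:s0")
	err := os.WriteFile("/proc/self/attr/keycreate", label, 0o600)
	fmt.Println("write /proc/self/attr/keycreate:", err) // expect: invalid argument
}
```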
Dec 13 14:36:36.096208 env[1647]: time="2024-12-13T14:36:36.096154460Z" level=info msg="shim disconnected" id=228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e Dec 13 14:36:36.096208 env[1647]: time="2024-12-13T14:36:36.096205145Z" level=warning msg="cleaning up after shim disconnected" id=228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e namespace=k8s.io Dec 13 14:36:36.096208 env[1647]: time="2024-12-13T14:36:36.096218274Z" level=info msg="cleaning up dead shim" Dec 13 14:36:36.110964 env[1647]: time="2024-12-13T14:36:36.110910888Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4744 runtime=io.containerd.runc.v2\n" Dec 13 14:36:36.111296 env[1647]: time="2024-12-13T14:36:36.111261822Z" level=info msg="TearDown network for sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" successfully" Dec 13 14:36:36.111416 env[1647]: time="2024-12-13T14:36:36.111292545Z" level=info msg="StopPodSandbox for \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" returns successfully" Dec 13 14:36:36.132222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e-shm.mount: Deactivated successfully. Dec 13 14:36:36.214466 kubelet[2845]: I1213 14:36:36.214414 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-kernel\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214466 kubelet[2845]: I1213 14:36:36.214465 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-xtables-lock\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214495 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-ipsec-secrets\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214518 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-etc-cni-netd\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214539 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hostproc\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214559 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-cgroup\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214580 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-run\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214607 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hubble-tls\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214632 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-config-path\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214651 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cni-path\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214675 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849qk\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-kube-api-access-849qk\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214697 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-lib-modules\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214725 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-bpf-maps\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.214755 kubelet[2845]: I1213 14:36:36.214752 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-clustermesh-secrets\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.215284 kubelet[2845]: I1213 14:36:36.214775 2845 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-net\") pod \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\" (UID: \"c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc\") " Dec 13 14:36:36.215284 kubelet[2845]: I1213 14:36:36.214856 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.215284 kubelet[2845]: I1213 14:36:36.214891 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.215284 kubelet[2845]: I1213 14:36:36.214912 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.225592 kubelet[2845]: I1213 14:36:36.225538 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:36.225766 kubelet[2845]: I1213 14:36:36.225615 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.225766 kubelet[2845]: I1213 14:36:36.225640 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.225766 kubelet[2845]: I1213 14:36:36.225661 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.225766 kubelet[2845]: I1213 14:36:36.225682 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.227892 kubelet[2845]: I1213 14:36:36.227849 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.228602 kubelet[2845]: I1213 14:36:36.227910 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.228602 kubelet[2845]: I1213 14:36:36.228316 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:36.236103 systemd[1]: var-lib-kubelet-pods-c0381796\x2dddc0\x2d4b2a\x2d9404\x2d5e8c5f1cb0cc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:36:36.239369 kubelet[2845]: I1213 14:36:36.239320 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:36.240519 systemd[1]: var-lib-kubelet-pods-c0381796\x2dddc0\x2d4b2a\x2d9404\x2d5e8c5f1cb0cc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:36.247328 systemd[1]: var-lib-kubelet-pods-c0381796\x2dddc0\x2d4b2a\x2d9404\x2d5e8c5f1cb0cc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:36.248924 kubelet[2845]: I1213 14:36:36.248872 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:36.250033 kubelet[2845]: I1213 14:36:36.249996 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:36.254926 systemd[1]: var-lib-kubelet-pods-c0381796\x2dddc0\x2d4b2a\x2d9404\x2d5e8c5f1cb0cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d849qk.mount: Deactivated successfully. Dec 13 14:36:36.256859 kubelet[2845]: I1213 14:36:36.256809 2845 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-kube-api-access-849qk" (OuterVolumeSpecName: "kube-api-access-849qk") pod "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" (UID: "c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc"). InnerVolumeSpecName "kube-api-access-849qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.314991 2845 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-xtables-lock\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315028 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-ipsec-secrets\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315044 2845 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-etc-cni-netd\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315056 2845 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hostproc\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315067 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-cgroup\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315080 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-run\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.315112 kubelet[2845]: I1213 14:36:36.315090 2845 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-hubble-tls\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.316693 kubelet[2845]: I1213 14:36:36.316660 2845 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cilium-config-path\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.316856 kubelet[2845]: I1213 14:36:36.316841 2845 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-cni-path\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317251 kubelet[2845]: I1213 14:36:36.317227 2845 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-849qk\" (UniqueName: \"kubernetes.io/projected/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-kube-api-access-849qk\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317330 kubelet[2845]: I1213 14:36:36.317248 2845 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-lib-modules\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317330 kubelet[2845]: I1213 14:36:36.317264 2845 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-bpf-maps\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317330 kubelet[2845]: I1213 14:36:36.317276 2845 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-clustermesh-secrets\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317330 kubelet[2845]: I1213 14:36:36.317295 2845 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-net\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:36.317330 kubelet[2845]: I1213 14:36:36.317307 2845 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc-host-proc-sys-kernel\") on node \"ip-172-31-18-151\" DevicePath \"\"" Dec 13 14:36:37.015411 kubelet[2845]: I1213 14:36:37.015300 2845 scope.go:117] "RemoveContainer" containerID="ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb" Dec 13 14:36:37.021515 systemd[1]: Removed slice kubepods-burstable-podc0381796_ddc0_4b2a_9404_5e8c5f1cb0cc.slice. Dec 13 14:36:37.022708 env[1647]: time="2024-12-13T14:36:37.022302920Z" level=info msg="RemoveContainer for \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\"" Dec 13 14:36:37.028611 env[1647]: time="2024-12-13T14:36:37.028559000Z" level=info msg="RemoveContainer for \"ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb\" returns successfully" Dec 13 14:36:37.077815 kubelet[2845]: I1213 14:36:37.077763 2845 topology_manager.go:215] "Topology Admit Handler" podUID="62a9fb29-7aad-4bd3-a638-066417c698e6" podNamespace="kube-system" podName="cilium-6mwrc" Dec 13 14:36:37.078173 kubelet[2845]: E1213 14:36:37.077841 2845 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" containerName="mount-cgroup" Dec 13 14:36:37.078173 kubelet[2845]: I1213 14:36:37.078028 2845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" containerName="mount-cgroup" Dec 13 14:36:37.086509 systemd[1]: Created slice kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice. 
Dec 13 14:36:37.122959 kubelet[2845]: I1213 14:36:37.122923 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62a9fb29-7aad-4bd3-a638-066417c698e6-hubble-tls\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.122967 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62a9fb29-7aad-4bd3-a638-066417c698e6-cilium-ipsec-secrets\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.122995 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-hostproc\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123018 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-bpf-maps\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123040 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62a9fb29-7aad-4bd3-a638-066417c698e6-cilium-config-path\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123061 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-cilium-cgroup\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123096 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-etc-cni-netd\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123118 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-lib-modules\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123139 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-xtables-lock\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123160 kubelet[2845]: I1213 14:36:37.123158 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/62a9fb29-7aad-4bd3-a638-066417c698e6-clustermesh-secrets\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123537 kubelet[2845]: I1213 14:36:37.123182 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-host-proc-sys-net\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123537 kubelet[2845]: I1213 14:36:37.123208 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-cni-path\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123537 kubelet[2845]: I1213 14:36:37.123233 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-host-proc-sys-kernel\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123537 kubelet[2845]: I1213 14:36:37.123269 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62a9fb29-7aad-4bd3-a638-066417c698e6-cilium-run\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.123537 kubelet[2845]: I1213 14:36:37.123293 2845 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djzv\" (UniqueName: \"kubernetes.io/projected/62a9fb29-7aad-4bd3-a638-066417c698e6-kube-api-access-7djzv\") pod \"cilium-6mwrc\" (UID: \"62a9fb29-7aad-4bd3-a638-066417c698e6\") " pod="kube-system/cilium-6mwrc" Dec 13 14:36:37.323901 kubelet[2845]: I1213 14:36:37.323789 2845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc" path="/var/lib/kubelet/pods/c0381796-ddc0-4b2a-9404-5e8c5f1cb0cc/volumes" Dec 13 14:36:37.396571 env[1647]: time="2024-12-13T14:36:37.396239185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mwrc,Uid:62a9fb29-7aad-4bd3-a638-066417c698e6,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:37.459119 env[1647]: time="2024-12-13T14:36:37.458945447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:37.459119 env[1647]: time="2024-12-13T14:36:37.459060848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:37.459449 env[1647]: time="2024-12-13T14:36:37.459100895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:37.459874 env[1647]: time="2024-12-13T14:36:37.459815939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3 pid=4774 runtime=io.containerd.runc.v2 Dec 13 14:36:37.489018 systemd[1]: Started cri-containerd-5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3.scope. Dec 13 14:36:37.521512 env[1647]: time="2024-12-13T14:36:37.521470697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mwrc,Uid:62a9fb29-7aad-4bd3-a638-066417c698e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\"" Dec 13 14:36:37.526766 env[1647]: time="2024-12-13T14:36:37.526730641Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:36:37.545027 env[1647]: time="2024-12-13T14:36:37.544972597Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484\"" Dec 13 14:36:37.546937 env[1647]: time="2024-12-13T14:36:37.546903769Z" level=info msg="StartContainer for \"e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484\"" Dec 13 14:36:37.568626 systemd[1]: Started cri-containerd-e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484.scope. Dec 13 14:36:37.633146 env[1647]: time="2024-12-13T14:36:37.632537978Z" level=info msg="StartContainer for \"e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484\" returns successfully" Dec 13 14:36:37.650345 systemd[1]: cri-containerd-e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484.scope: Deactivated successfully. 
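This time the same image and the same mount-cgroup init container start cleanly: StartContainer for e55558c3... returns successfully, and the scope deactivating afterwards just marks the one-shot container exiting. Per the command in the earlier spec dump (copy cilium-mount into /hostbin, nsenter into PID 1's cgroup and mount namespaces, mount cgroup2 at $CGROUP_ROOT), its end result is roughly the mount call below (a sketch, needs root; /run/cilium/cgroupv2 comes from the CGROUP_ROOT env var in the spec):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// CGROUP_ROOT from the init-container spec earlier in the log.
	const root = "/run/cilium/cgroupv2"
	if err := os.MkdirAll(root, 0o755); err != nil {
		log.Fatal(err)
	}
	// cilium-mount's net effect: a cgroup2 filesystem mounted at
	// CGROUP_ROOT in the host mount namespace (hence the nsenter).
	if err := unix.Mount("none", root, "cgroup2", 0, ""); err != nil {
		log.Fatal(err)
	}
}
```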
Dec 13 14:36:37.701794 env[1647]: time="2024-12-13T14:36:37.701740506Z" level=info msg="shim disconnected" id=e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484 Dec 13 14:36:37.702117 env[1647]: time="2024-12-13T14:36:37.702093169Z" level=warning msg="cleaning up after shim disconnected" id=e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484 namespace=k8s.io Dec 13 14:36:37.702284 env[1647]: time="2024-12-13T14:36:37.702258851Z" level=info msg="cleaning up dead shim" Dec 13 14:36:37.716848 env[1647]: time="2024-12-13T14:36:37.716793656Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4859 runtime=io.containerd.runc.v2\n" Dec 13 14:36:38.023514 env[1647]: time="2024-12-13T14:36:38.022347591Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:36:38.058742 env[1647]: time="2024-12-13T14:36:38.058674937Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b\"" Dec 13 14:36:38.059728 env[1647]: time="2024-12-13T14:36:38.059699322Z" level=info msg="StartContainer for \"4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b\"" Dec 13 14:36:38.082629 systemd[1]: Started cri-containerd-4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b.scope. Dec 13 14:36:38.118259 env[1647]: time="2024-12-13T14:36:38.118196971Z" level=info msg="StartContainer for \"4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b\" returns successfully" Dec 13 14:36:38.133039 systemd[1]: cri-containerd-4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b.scope: Deactivated successfully. 
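Unlike the failed-start variant earlier, the "shim disconnected" and "cleaning up dead shim" lines here are the normal epilogue of a completed init container. The next step in the chain, apply-sysctl-overwrites, adjusts kernel parameters the way any sysctl write does: a write to the matching path under /proc/sys. A generic sketch follows; the specific keys Cilium sets are not recorded in this log, so the key below is purely illustrative:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. key "net.ipv4.ip_forward"
// maps to /proc/sys/net/ipv4/ip_forward.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative key only; the log does not show which sysctls
	// apply-sysctl-overwrites actually touched on this node.
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		log.Fatal(err)
	}
}
```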
Dec 13 14:36:38.184415 env[1647]: time="2024-12-13T14:36:38.184343531Z" level=info msg="shim disconnected" id=4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b Dec 13 14:36:38.184746 env[1647]: time="2024-12-13T14:36:38.184438805Z" level=warning msg="cleaning up after shim disconnected" id=4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b namespace=k8s.io Dec 13 14:36:38.184746 env[1647]: time="2024-12-13T14:36:38.184452669Z" level=info msg="cleaning up dead shim" Dec 13 14:36:38.194518 env[1647]: time="2024-12-13T14:36:38.194473146Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4919 runtime=io.containerd.runc.v2\n" Dec 13 14:36:38.319785 kubelet[2845]: E1213 14:36:38.319645 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rttqf" podUID="58c0bd14-ab1d-4658-8a37-994d69630c96" Dec 13 14:36:38.698734 kubelet[2845]: W1213 14:36:38.694590 2845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0381796_ddc0_4b2a_9404_5e8c5f1cb0cc.slice/cri-containerd-ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb.scope WatchSource:0}: container "ec4866cf5c3b7dfd0f6a9061857fa09863d32025e541e1e74d8e234ab16fd5bb" in namespace "k8s.io": not found Dec 13 14:36:39.032366 env[1647]: time="2024-12-13T14:36:39.032251015Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:36:39.089574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537026179.mount: Deactivated successfully. Dec 13 14:36:39.092141 env[1647]: time="2024-12-13T14:36:39.092090116Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66\"" Dec 13 14:36:39.093650 env[1647]: time="2024-12-13T14:36:39.093613426Z" level=info msg="StartContainer for \"1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66\"" Dec 13 14:36:39.152008 systemd[1]: Started cri-containerd-1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66.scope. Dec 13 14:36:39.197407 env[1647]: time="2024-12-13T14:36:39.197286723Z" level=info msg="StartContainer for \"1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66\" returns successfully" Dec 13 14:36:39.207067 systemd[1]: cri-containerd-1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66.scope: Deactivated successfully. Dec 13 14:36:39.252554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66-rootfs.mount: Deactivated successfully. 
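mount-bpf-fs, the init container that just completed, ensures a BPF filesystem instance is mounted, matching the bpf-maps host-path volume attached to the pod earlier. A sketch of the underlying operation (the conventional /sys/fs/bpf path is an assumption here; the real container first checks whether bpffs is already mounted before attempting this):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount a bpffs instance so pinned BPF maps and programs survive
	// agent restarts; idempotency checks are omitted in this sketch.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
}
```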
Dec 13 14:36:39.254468 env[1647]: time="2024-12-13T14:36:39.254413923Z" level=info msg="shim disconnected" id=1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66 Dec 13 14:36:39.254646 env[1647]: time="2024-12-13T14:36:39.254467186Z" level=warning msg="cleaning up after shim disconnected" id=1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66 namespace=k8s.io Dec 13 14:36:39.254646 env[1647]: time="2024-12-13T14:36:39.254480744Z" level=info msg="cleaning up dead shim" Dec 13 14:36:39.265000 env[1647]: time="2024-12-13T14:36:39.264948411Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4975 runtime=io.containerd.runc.v2\n" Dec 13 14:36:39.528161 kubelet[2845]: E1213 14:36:39.528106 2845 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:36:40.064822 env[1647]: time="2024-12-13T14:36:40.064733327Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:36:40.089243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773006214.mount: Deactivated successfully. Dec 13 14:36:40.139633 env[1647]: time="2024-12-13T14:36:40.139577710Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828\"" Dec 13 14:36:40.141901 env[1647]: time="2024-12-13T14:36:40.141854217Z" level=info msg="StartContainer for \"297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828\"" Dec 13 14:36:40.182097 systemd[1]: Started cri-containerd-297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828.scope. Dec 13 14:36:40.236631 systemd[1]: cri-containerd-297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828.scope: Deactivated successfully. Dec 13 14:36:40.238654 env[1647]: time="2024-12-13T14:36:40.238483630Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice/cri-containerd-297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828.scope/memory.events\": no such file or directory" Dec 13 14:36:40.248368 env[1647]: time="2024-12-13T14:36:40.247976734Z" level=info msg="StartContainer for \"297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828\" returns successfully" Dec 13 14:36:40.287491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828-rootfs.mount: Deactivated successfully. 
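The *cgroupsv2.Manager.EventChan warning above is benign for one-shot containers: clean-cilium-state exits so quickly that systemd has already removed its transient scope by the time containerd tries to watch memory.events, and adding an inotify watch on a vanished path fails with ENOENT, hence "no such file or directory". A sketch of the failing call (the path is a placeholder standing in for the cleaned-up scope directory):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.InotifyInit1(0)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	// The scope directory is already gone, so the watch cannot be added;
	// this is the same ENOENT the EventChan warning reports above.
	_, err = unix.InotifyAddWatch(fd,
		"/sys/fs/cgroup/kubepods.slice/.../memory.events", unix.IN_MODIFY)
	fmt.Println(err) // no such file or directory
}
```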
Dec 13 14:36:40.307864 env[1647]: time="2024-12-13T14:36:40.307807652Z" level=info msg="shim disconnected" id=297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828 Dec 13 14:36:40.307864 env[1647]: time="2024-12-13T14:36:40.307852683Z" level=warning msg="cleaning up after shim disconnected" id=297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828 namespace=k8s.io Dec 13 14:36:40.307864 env[1647]: time="2024-12-13T14:36:40.307867777Z" level=info msg="cleaning up dead shim" Dec 13 14:36:40.320639 kubelet[2845]: E1213 14:36:40.319744 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rttqf" podUID="58c0bd14-ab1d-4658-8a37-994d69630c96" Dec 13 14:36:40.322002 env[1647]: time="2024-12-13T14:36:40.321955300Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5032 runtime=io.containerd.runc.v2\n" Dec 13 14:36:41.067190 env[1647]: time="2024-12-13T14:36:41.065644906Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:36:41.107447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868595669.mount: Deactivated successfully. Dec 13 14:36:41.116549 env[1647]: time="2024-12-13T14:36:41.116488879Z" level=info msg="CreateContainer within sandbox \"5b76442517c7d40a7e7ae3242700434d203ab6c66705388d130eb417029024d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0022c1a8854b772b07c02cd5a738c14a1917286b06883ec6cf4e00ff18da6d41\"" Dec 13 14:36:41.117361 env[1647]: time="2024-12-13T14:36:41.117324979Z" level=info msg="StartContainer for \"0022c1a8854b772b07c02cd5a738c14a1917286b06883ec6cf4e00ff18da6d41\"" Dec 13 14:36:41.146294 systemd[1]: Started cri-containerd-0022c1a8854b772b07c02cd5a738c14a1917286b06883ec6cf4e00ff18da6d41.scope. 
Dec 13 14:36:41.203129 env[1647]: time="2024-12-13T14:36:41.203083250Z" level=info msg="StartContainer for \"0022c1a8854b772b07c02cd5a738c14a1917286b06883ec6cf4e00ff18da6d41\" returns successfully" Dec 13 14:36:41.819849 kubelet[2845]: W1213 14:36:41.819798 2845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice/cri-containerd-e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484.scope WatchSource:0}: task e55558c3480f8ba8b616694c75d80f4b9391751fead85b3799db10bb55aa9484 not found: not found Dec 13 14:36:42.320332 kubelet[2845]: E1213 14:36:42.320277 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rttqf" podUID="58c0bd14-ab1d-4658-8a37-994d69630c96" Dec 13 14:36:42.326838 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:36:43.457490 kubelet[2845]: I1213 14:36:43.457435 2845 setters.go:580] "Node became not ready" node="ip-172-31-18-151" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:36:43Z","lastTransitionTime":"2024-12-13T14:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:36:44.319734 kubelet[2845]: E1213 14:36:44.319682 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rttqf" podUID="58c0bd14-ab1d-4658-8a37-994d69630c96" Dec 13 14:36:44.619114 systemd[1]: run-containerd-runc-k8s.io-0022c1a8854b772b07c02cd5a738c14a1917286b06883ec6cf4e00ff18da6d41-runc.zZ6n9Z.mount: Deactivated successfully. Dec 13 14:36:44.938548 kubelet[2845]: W1213 14:36:44.938269 2845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice/cri-containerd-4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b.scope WatchSource:0}: task 4548b68edf4219303f15e6e9ba49dd9e2daf47c6b4db4259597d95397b2c528b not found: not found Dec 13 14:36:45.915517 (udev-worker)[5597]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:36:45.917776 systemd-networkd[1370]: lxc_health: Link UP Dec 13 14:36:45.922709 (udev-worker)[5599]: Network interface NamePolicy= disabled on kernel command line. 
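With cilium-agent started, two of the surrounding lines deserve a note: the kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" is an informational crypto self-test notice, most likely triggered when the IPsec AEAD backed by cilium-ipsec-secrets is first instantiated, not an error; and lxc_health is the veth pair the agent creates for endpoint health checking, which is why udev and systemd-networkd suddenly see a new link. Link state for any netdev can be read straight from sysfs, as in this sketch:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// operstate and carrier are standard sysfs attributes for a netdev;
	// lxc_health is the interface cilium-agent creates for health checks.
	for _, attr := range []string{"operstate", "carrier"} {
		b, err := os.ReadFile("/sys/class/net/lxc_health/" + attr)
		if err != nil {
			fmt.Println(attr, err)
			continue
		}
		fmt.Println(attr+":", strings.TrimSpace(string(b)))
	}
}
```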
Dec 13 14:36:45.937708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:36:45.936675 systemd-networkd[1370]: lxc_health: Gained carrier
Dec 13 14:36:47.408979 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Dec 13 14:36:47.530747 kubelet[2845]: I1213 14:36:47.530678 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6mwrc" podStartSLOduration=10.530655503 podStartE2EDuration="10.530655503s" podCreationTimestamp="2024-12-13 14:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:36:42.139253477 +0000 UTC m=+143.049550924" watchObservedRunningTime="2024-12-13 14:36:47.530655503 +0000 UTC m=+148.440952966"
Dec 13 14:36:48.052077 kubelet[2845]: W1213 14:36:48.052031 2845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice/cri-containerd-1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66.scope WatchSource:0}: task 1eed076e72c771483dca14d03bdeea6f89b4fe4ee4281cc0a6b2e81829ba0c66 not found: not found
Dec 13 14:36:51.170244 kubelet[2845]: W1213 14:36:51.170057 2845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fb29_7aad_4bd3_a638_066417c698e6.slice/cri-containerd-297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828.scope WatchSource:0}: task 297f2d310f74b82ecbf2d0e67c7ed22c017d8f14978a97f7da923263cc59d828 not found: not found
Dec 13 14:36:51.614496 sshd[4705]: pam_unix(sshd:session): session closed for user core
Dec 13 14:36:51.619451 systemd-logind[1636]: Session 30 logged out. Waiting for processes to exit.
Dec 13 14:36:51.619684 systemd[1]: sshd@29-172.31.18.151:22-139.178.89.65:46596.service: Deactivated successfully.
Dec 13 14:36:51.621066 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 14:36:51.622421 systemd-logind[1636]: Removed session 30.
Dec 13 14:37:07.763902 systemd[1]: cri-containerd-350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8.scope: Deactivated successfully.
Dec 13 14:37:07.765891 systemd[1]: cri-containerd-350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8.scope: Consumed 3.459s CPU time.
Dec 13 14:37:07.836058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8-rootfs.mount: Deactivated successfully.
Dec 13 14:37:07.851070 env[1647]: time="2024-12-13T14:37:07.850976581Z" level=info msg="shim disconnected" id=350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8
Dec 13 14:37:07.851070 env[1647]: time="2024-12-13T14:37:07.851049517Z" level=warning msg="cleaning up after shim disconnected" id=350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8 namespace=k8s.io
Dec 13 14:37:07.851070 env[1647]: time="2024-12-13T14:37:07.851065290Z" level=info msg="cleaning up dead shim"
Dec 13 14:37:07.876217 env[1647]: time="2024-12-13T14:37:07.876174503Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5712 runtime=io.containerd.runc.v2\n"
Dec 13 14:37:08.163678 kubelet[2845]: I1213 14:37:08.162843 2845 scope.go:117] "RemoveContainer" containerID="350e66048e5a41816e67d95dac85a2ce545981d619e4f7e2af0bbeaee78cc7a8"
Dec 13 14:37:08.182654 env[1647]: time="2024-12-13T14:37:08.182605799Z" level=info msg="CreateContainer within sandbox \"344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:37:08.206325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884676849.mount: Deactivated successfully.
Dec 13 14:37:08.214400 env[1647]: time="2024-12-13T14:37:08.214326794Z" level=info msg="CreateContainer within sandbox \"344886c71981a1cbf879ae7ad55947907e325ea18a7d3108305a7aafa4c8e531\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"910a763f71b7ad676ce9f864e867e8636fa0356f452b3b8b8aa6fddbf2a7c5a7\""
Dec 13 14:37:08.215273 env[1647]: time="2024-12-13T14:37:08.215239186Z" level=info msg="StartContainer for \"910a763f71b7ad676ce9f864e867e8636fa0356f452b3b8b8aa6fddbf2a7c5a7\""
Dec 13 14:37:08.251056 systemd[1]: Started cri-containerd-910a763f71b7ad676ce9f864e867e8636fa0356f452b3b8b8aa6fddbf2a7c5a7.scope.
Dec 13 14:37:08.343027 env[1647]: time="2024-12-13T14:37:08.342971678Z" level=info msg="StartContainer for \"910a763f71b7ad676ce9f864e867e8636fa0356f452b3b8b8aa6fddbf2a7c5a7\" returns successfully"
Dec 13 14:37:12.649346 kubelet[2845]: E1213 14:37:12.649286 2845 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:37:13.951499 systemd[1]: cri-containerd-614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce.scope: Deactivated successfully.
Dec 13 14:37:13.952079 systemd[1]: cri-containerd-614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce.scope: Consumed 1.830s CPU time.
Dec 13 14:37:13.989634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce-rootfs.mount: Deactivated successfully.
Dec 13 14:37:14.018400 env[1647]: time="2024-12-13T14:37:14.017831521Z" level=info msg="shim disconnected" id=614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce
Dec 13 14:37:14.018400 env[1647]: time="2024-12-13T14:37:14.017921073Z" level=warning msg="cleaning up after shim disconnected" id=614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce namespace=k8s.io
Dec 13 14:37:14.019021 env[1647]: time="2024-12-13T14:37:14.018420815Z" level=info msg="cleaning up dead shim"
Dec 13 14:37:14.065690 env[1647]: time="2024-12-13T14:37:14.065539411Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5775 runtime=io.containerd.runc.v2\n"
Dec 13 14:37:14.183495 kubelet[2845]: I1213 14:37:14.183464 2845 scope.go:117] "RemoveContainer" containerID="614ac2ee7d69c77d6b4ce5f4b9a57893ec14de94986fd2a3b9d273082de0fcce"
Dec 13 14:37:14.187707 env[1647]: time="2024-12-13T14:37:14.187634382Z" level=info msg="CreateContainer within sandbox \"41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:37:14.232791 env[1647]: time="2024-12-13T14:37:14.232589336Z" level=info msg="CreateContainer within sandbox \"41756fc694d20c88790193474d2ccc25556b3abd413044c37f54a1a329853689\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"42329e8dff5205c2c58cee882775350a5955ee5f42d2119245f86e360933bf36\""
Dec 13 14:37:14.239017 env[1647]: time="2024-12-13T14:37:14.238940125Z" level=info msg="StartContainer for \"42329e8dff5205c2c58cee882775350a5955ee5f42d2119245f86e360933bf36\""
Dec 13 14:37:14.292304 systemd[1]: Started cri-containerd-42329e8dff5205c2c58cee882775350a5955ee5f42d2119245f86e360933bf36.scope.
Dec 13 14:37:14.394281 env[1647]: time="2024-12-13T14:37:14.394219364Z" level=info msg="StartContainer for \"42329e8dff5205c2c58cee882775350a5955ee5f42d2119245f86e360933bf36\" returns successfully"
Dec 13 14:37:14.989844 systemd[1]: run-containerd-runc-k8s.io-42329e8dff5205c2c58cee882775350a5955ee5f42d2119245f86e360933bf36-runc.3TekcQ.mount: Deactivated successfully.
Dec 13 14:37:19.375982 env[1647]: time="2024-12-13T14:37:19.375939607Z" level=info msg="StopPodSandbox for \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\""
Dec 13 14:37:19.376468 env[1647]: time="2024-12-13T14:37:19.376043941Z" level=info msg="TearDown network for sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" successfully"
Dec 13 14:37:19.376468 env[1647]: time="2024-12-13T14:37:19.376089074Z" level=info msg="StopPodSandbox for \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" returns successfully"
Dec 13 14:37:19.376674 env[1647]: time="2024-12-13T14:37:19.376647578Z" level=info msg="RemovePodSandbox for \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\""
Dec 13 14:37:19.376757 env[1647]: time="2024-12-13T14:37:19.376682452Z" level=info msg="Forcibly stopping sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\""
Dec 13 14:37:19.376808 env[1647]: time="2024-12-13T14:37:19.376783200Z" level=info msg="TearDown network for sandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" successfully"
Dec 13 14:37:19.390629 env[1647]: time="2024-12-13T14:37:19.390580489Z" level=info msg="RemovePodSandbox \"37958201e7ee578281f1cf02a69c583f01a0552290f0d9623f679be773ff20c6\" returns successfully"
Dec 13 14:37:19.391717 env[1647]: time="2024-12-13T14:37:19.391671986Z" level=info msg="StopPodSandbox for \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\""
Dec 13 14:37:19.391851 env[1647]: time="2024-12-13T14:37:19.391779948Z" level=info msg="TearDown network for sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" successfully"
Dec 13 14:37:19.391851 env[1647]: time="2024-12-13T14:37:19.391831099Z" level=info msg="StopPodSandbox for \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" returns successfully"
Dec 13 14:37:19.393049 env[1647]: time="2024-12-13T14:37:19.393010634Z" level=info msg="RemovePodSandbox for \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\""
Dec 13 14:37:19.393175 env[1647]: time="2024-12-13T14:37:19.393043321Z" level=info msg="Forcibly stopping sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\""
Dec 13 14:37:19.393175 env[1647]: time="2024-12-13T14:37:19.393152268Z" level=info msg="TearDown network for sandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" successfully"
Dec 13 14:37:19.401027 env[1647]: time="2024-12-13T14:37:19.400507223Z" level=info msg="RemovePodSandbox \"228191a0ac62157af0b002fdb426e6f86cf095a787a5a4b22219e43473d8e69e\" returns successfully"
Dec 13 14:37:19.402063 env[1647]: time="2024-12-13T14:37:19.402031882Z" level=info msg="StopPodSandbox for \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\""
Dec 13 14:37:19.402219 env[1647]: time="2024-12-13T14:37:19.402161639Z" level=info msg="TearDown network for sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" successfully"
Dec 13 14:37:19.402280 env[1647]: time="2024-12-13T14:37:19.402216327Z" level=info msg="StopPodSandbox for \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" returns successfully"
Dec 13 14:37:19.403308 env[1647]: time="2024-12-13T14:37:19.402851096Z" level=info msg="RemovePodSandbox for \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\""
Dec 13 14:37:19.403463 env[1647]: time="2024-12-13T14:37:19.403303775Z" level=info msg="Forcibly stopping sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\""
Dec 13 14:37:19.403527 env[1647]: time="2024-12-13T14:37:19.403464462Z" level=info msg="TearDown network for sandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" successfully"
Dec 13 14:37:19.419877 env[1647]: time="2024-12-13T14:37:19.419791232Z" level=info msg="RemovePodSandbox \"aa0443327707b63289f60e0b0ae82bb08afe934f6a821201ba1cbe5f157f3196\" returns successfully"
Dec 13 14:37:22.650166 kubelet[2845]: E1213 14:37:22.650108 2845 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-151?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"