Jan 17 12:12:21.155250 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:12:21.155298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:12:21.155315 kernel: BIOS-provided physical RAM map:
Jan 17 12:12:21.155327 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:12:21.155339 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:12:21.155351 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:12:21.155369 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 17 12:12:21.155381 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 17 12:12:21.155394 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 17 12:12:21.155407 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:12:21.155420 kernel: NX (Execute Disable) protection: active
Jan 17 12:12:21.155432 kernel: APIC: Static calls initialized
Jan 17 12:12:21.155445 kernel: SMBIOS 2.7 present.
Jan 17 12:12:21.155458 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 12:12:21.155477 kernel: Hypervisor detected: KVM
Jan 17 12:12:21.155492 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:12:21.155506 kernel: kvm-clock: using sched offset of 5749844533 cycles
Jan 17 12:12:21.155522 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:12:21.155536 kernel: tsc: Detected 2499.998 MHz processor
Jan 17 12:12:21.155551 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:12:21.156613 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:12:21.156634 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 17 12:12:21.156650 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:12:21.156664 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 17 12:12:21.156679 kernel: Using GB pages for direct mapping
Jan 17 12:12:21.156693 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:12:21.156707 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 17 12:12:21.156721 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 17 12:12:21.156735 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 12:12:21.156750 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 12:12:21.156766 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 17 12:12:21.156781 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 12:12:21.156795 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 12:12:21.156809 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 12:12:21.156824 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 12:12:21.156838 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 12:12:21.156853 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 12:12:21.156867 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 12:12:21.156882 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 17 12:12:21.156900 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 17 12:12:21.156920 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 17 12:12:21.156935 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 17 12:12:21.156950 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 17 12:12:21.156966 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 17 12:12:21.156984 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 17 12:12:21.157000 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 17 12:12:21.157015 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 17 12:12:21.157030 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 17 12:12:21.157046 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:12:21.157061 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:12:21.157077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 12:12:21.157092 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 12:12:21.157106 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 17 12:12:21.157124 kernel: Zone ranges:
Jan 17 12:12:21.157140 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:12:21.157155 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 17 12:12:21.157170 kernel:   Normal   empty
Jan 17 12:12:21.157186 kernel: Movable zone start for each node
Jan 17 12:12:21.157201 kernel: Early memory node ranges
Jan 17 12:12:21.157217 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:12:21.157232 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 17 12:12:21.157247 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 17 12:12:21.157266 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:12:21.157280 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:12:21.157295 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 17 12:12:21.157310 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 12:12:21.157326 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:12:21.157341 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 12:12:21.157356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:12:21.157372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:12:21.157387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:12:21.157402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:12:21.157420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:12:21.157435 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:12:21.157451 kernel: TSC deadline timer available
Jan 17 12:12:21.157466 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:12:21.157481 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:12:21.157496 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 17 12:12:21.157511 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:12:21.157527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:12:21.158617 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:12:21.158636 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:12:21.158650 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:12:21.158664 kernel: pcpu-alloc: [0] 0 1 
Jan 17 12:12:21.158679 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:12:21.158694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:12:21.158710 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:12:21.158724 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:12:21.158738 kernel: random: crng init done
Jan 17 12:12:21.158756 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:12:21.158771 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:12:21.158785 kernel: Fallback order for Node 0: 0 
Jan 17 12:12:21.158799 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Jan 17 12:12:21.158812 kernel: Policy zone: DMA32
Jan 17 12:12:21.158826 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:12:21.158841 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved)
Jan 17 12:12:21.158854 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:12:21.158869 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:12:21.158887 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:12:21.158901 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:12:21.158916 kernel: Dynamic Preempt: voluntary
Jan 17 12:12:21.158931 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:12:21.159092 kernel: rcu:         RCU event tracing is enabled.
Jan 17 12:12:21.159117 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:12:21.159135 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 17 12:12:21.159151 kernel:         Rude variant of Tasks RCU enabled.
Jan 17 12:12:21.159167 kernel:         Tracing variant of Tasks RCU enabled.
Jan 17 12:12:21.159188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:12:21.159203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:12:21.159217 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:12:21.159231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:12:21.159244 kernel: Console: colour VGA+ 80x25
Jan 17 12:12:21.159331 kernel: printk: console [ttyS0] enabled
Jan 17 12:12:21.159346 kernel: ACPI: Core revision 20230628
Jan 17 12:12:21.159359 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 12:12:21.159373 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:12:21.159390 kernel: x2apic enabled
Jan 17 12:12:21.159404 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:12:21.159430 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 17 12:12:21.159447 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 17 12:12:21.159462 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 12:12:21.159477 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 17 12:12:21.159491 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:12:21.159505 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:12:21.159519 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:12:21.159533 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:12:21.159548 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 12:12:21.160598 kernel: RETBleed: Vulnerable
Jan 17 12:12:21.160621 kernel: Speculative Store Bypass: Vulnerable
Jan 17 12:12:21.160636 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:12:21.160652 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:12:21.160667 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 12:12:21.160681 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:12:21.160695 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:12:21.160711 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:12:21.160730 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 12:12:21.160744 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 12:12:21.160759 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 12:12:21.160773 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 12:12:21.160785 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 12:12:21.160800 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 12:12:21.160816 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 17 12:12:21.160831 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Jan 17 12:12:21.160846 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Jan 17 12:12:21.160861 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Jan 17 12:12:21.160875 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Jan 17 12:12:21.160893 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 12:12:21.160908 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Jan 17 12:12:21.160923 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 12:12:21.160936 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:12:21.160951 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:12:21.160966 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:12:21.160980 kernel: landlock: Up and running.
Jan 17 12:12:21.160994 kernel: SELinux:  Initializing.
Jan 17 12:12:21.161008 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:12:21.161022 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:12:21.161036 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 12:12:21.161056 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:12:21.161071 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:12:21.161087 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:12:21.161103 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 12:12:21.161119 kernel: signal: max sigframe size: 3632
Jan 17 12:12:21.161134 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:12:21.161151 kernel: rcu:         Max phase no-delay instances is 400.
Jan 17 12:12:21.161166 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:12:21.161181 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:12:21.161199 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:12:21.161214 kernel: .... node  #0, CPUs:      #1
Jan 17 12:12:21.161231 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 12:12:21.161247 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 12:12:21.161263 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:12:21.161278 kernel: smpboot: Max logical packages: 1
Jan 17 12:12:21.161294 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 17 12:12:21.161310 kernel: devtmpfs: initialized
Jan 17 12:12:21.161327 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:12:21.161342 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:12:21.161357 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:12:21.161373 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:12:21.161388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:12:21.161406 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:12:21.161424 kernel: audit: type=2000 audit(1737115940.770:1): state=initialized audit_enabled=0 res=1
Jan 17 12:12:21.161439 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:12:21.161457 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:12:21.161480 kernel: cpuidle: using governor menu
Jan 17 12:12:21.161497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:12:21.161513 kernel: dca service started, version 1.12.1
Jan 17 12:12:21.161528 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:12:21.161545 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:12:21.163622 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:12:21.163644 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:12:21.163661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:12:21.163891 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:12:21.163973 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:12:21.163993 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:12:21.164146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:12:21.164166 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:12:21.164183 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 12:12:21.164227 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:12:21.164486 kernel: ACPI: Interpreter enabled
Jan 17 12:12:21.164508 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:12:21.164523 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:12:21.164725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:12:21.164747 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:12:21.164762 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 12:12:21.164776 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:12:21.165023 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:12:21.165175 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:12:21.165332 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:12:21.165356 kernel: acpiphp: Slot [3] registered
Jan 17 12:12:21.165434 kernel: acpiphp: Slot [4] registered
Jan 17 12:12:21.165453 kernel: acpiphp: Slot [5] registered
Jan 17 12:12:21.165471 kernel: acpiphp: Slot [6] registered
Jan 17 12:12:21.165487 kernel: acpiphp: Slot [7] registered
Jan 17 12:12:21.165505 kernel: acpiphp: Slot [8] registered
Jan 17 12:12:21.165523 kernel: acpiphp: Slot [9] registered
Jan 17 12:12:21.165542 kernel: acpiphp: Slot [10] registered
Jan 17 12:12:21.166521 kernel: acpiphp: Slot [11] registered
Jan 17 12:12:21.166547 kernel: acpiphp: Slot [12] registered
Jan 17 12:12:21.166582 kernel: acpiphp: Slot [13] registered
Jan 17 12:12:21.166599 kernel: acpiphp: Slot [14] registered
Jan 17 12:12:21.166616 kernel: acpiphp: Slot [15] registered
Jan 17 12:12:21.166632 kernel: acpiphp: Slot [16] registered
Jan 17 12:12:21.166648 kernel: acpiphp: Slot [17] registered
Jan 17 12:12:21.166666 kernel: acpiphp: Slot [18] registered
Jan 17 12:12:21.166682 kernel: acpiphp: Slot [19] registered
Jan 17 12:12:21.166698 kernel: acpiphp: Slot [20] registered
Jan 17 12:12:21.166763 kernel: acpiphp: Slot [21] registered
Jan 17 12:12:21.166780 kernel: acpiphp: Slot [22] registered
Jan 17 12:12:21.166801 kernel: acpiphp: Slot [23] registered
Jan 17 12:12:21.166817 kernel: acpiphp: Slot [24] registered
Jan 17 12:12:21.166833 kernel: acpiphp: Slot [25] registered
Jan 17 12:12:21.166851 kernel: acpiphp: Slot [26] registered
Jan 17 12:12:21.166867 kernel: acpiphp: Slot [27] registered
Jan 17 12:12:21.166884 kernel: acpiphp: Slot [28] registered
Jan 17 12:12:21.166900 kernel: acpiphp: Slot [29] registered
Jan 17 12:12:21.166916 kernel: acpiphp: Slot [30] registered
Jan 17 12:12:21.166933 kernel: acpiphp: Slot [31] registered
Jan 17 12:12:21.166953 kernel: PCI host bridge to bus 0000:00
Jan 17 12:12:21.167215 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 17 12:12:21.170216 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 17 12:12:21.170541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:12:21.171754 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:12:21.171882 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:12:21.172045 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:12:21.172289 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:12:21.172437 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 12:12:21.174637 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 12:12:21.174807 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Jan 17 12:12:21.174945 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 12:12:21.175450 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 12:12:21.175628 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 12:12:21.175779 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 12:12:21.175915 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 12:12:21.176049 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 12:12:21.176370 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 12:12:21.176514 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 17 12:12:21.178710 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:12:21.178860 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:12:21.179172 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 12:12:21.179525 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 17 12:12:21.179949 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 12:12:21.180455 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 17 12:12:21.180482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:12:21.180499 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:12:21.180523 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:12:21.180539 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:12:21.180602 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:12:21.180620 kernel: iommu: Default domain type: Translated
Jan 17 12:12:21.180636 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:12:21.180652 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:12:21.180669 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:12:21.180685 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:12:21.180701 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 17 12:12:21.180880 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 12:12:21.181196 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 12:12:21.181337 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:12:21.181357 kernel: vgaarb: loaded
Jan 17 12:12:21.181418 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 12:12:21.181436 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 12:12:21.181453 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:12:21.181469 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:12:21.181482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:12:21.181501 kernel: pnp: PnP ACPI init
Jan 17 12:12:21.181517 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 12:12:21.181533 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:12:21.181548 kernel: NET: Registered PF_INET protocol family
Jan 17 12:12:21.181592 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:12:21.181608 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:12:21.181624 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:12:21.181640 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:12:21.181656 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:12:21.181675 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:12:21.181691 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:12:21.181707 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:12:21.181723 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:12:21.181738 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:12:21.181869 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 17 12:12:21.182052 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 17 12:12:21.182167 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:12:21.182354 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:12:21.185584 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:12:21.185624 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:12:21.185641 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:12:21.185657 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 17 12:12:21.185714 kernel: clocksource: Switched to clocksource tsc
Jan 17 12:12:21.185731 kernel: Initialise system trusted keyrings
Jan 17 12:12:21.185747 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:12:21.185803 kernel: Key type asymmetric registered
Jan 17 12:12:21.185820 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:12:21.185836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:12:21.185882 kernel: io scheduler mq-deadline registered
Jan 17 12:12:21.185902 kernel: io scheduler kyber registered
Jan 17 12:12:21.185918 kernel: io scheduler bfq registered
Jan 17 12:12:21.185933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:12:21.185982 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:12:21.185998 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:12:21.186017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:12:21.186066 kernel: i8042: Warning: Keylock active
Jan 17 12:12:21.186083 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:12:21.186098 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:12:21.186719 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 12:12:21.186854 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 12:12:21.186971 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:12:20 UTC (1737115940)
Jan 17 12:12:21.187125 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 12:12:21.187152 kernel: intel_pstate: CPU model not supported
Jan 17 12:12:21.187168 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:12:21.187183 kernel: Segment Routing with IPv6
Jan 17 12:12:21.187198 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:12:21.187213 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:12:21.187317 kernel: Key type dns_resolver registered
Jan 17 12:12:21.187335 kernel: IPI shorthand broadcast: enabled
Jan 17 12:12:21.187351 kernel: sched_clock: Marking stable (731002367, 213333562)->(1019248312, -74912383)
Jan 17 12:12:21.187366 kernel: registered taskstats version 1
Jan 17 12:12:21.187386 kernel: Loading compiled-in X.509 certificates
Jan 17 12:12:21.187401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:12:21.187416 kernel: Key type .fscrypt registered
Jan 17 12:12:21.187430 kernel: Key type fscrypt-provisioning registered
Jan 17 12:12:21.187445 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:12:21.187460 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:12:21.187475 kernel: ima: No architecture policies found
Jan 17 12:12:21.187490 kernel: clk: Disabling unused clocks
Jan 17 12:12:21.187506 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:12:21.187524 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:12:21.187539 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:12:21.187603 kernel: Run /init as init process
Jan 17 12:12:21.187618 kernel:   with arguments:
Jan 17 12:12:21.187633 kernel:     /init
Jan 17 12:12:21.187715 kernel:   with environment:
Jan 17 12:12:21.187732 kernel:     HOME=/
Jan 17 12:12:21.187857 kernel:     TERM=linux
Jan 17 12:12:21.187873 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:12:21.187900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:12:21.187931 systemd[1]: Detected virtualization amazon.
Jan 17 12:12:21.187951 systemd[1]: Detected architecture x86-64.
Jan 17 12:12:21.187967 systemd[1]: Running in initrd.
Jan 17 12:12:21.187983 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:12:21.188001 systemd[1]: Hostname set to <localhost>.
Jan 17 12:12:21.188018 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:12:21.188035 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:12:21.188051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:12:21.188198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:12:21.188220 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:12:21.188237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:12:21.188254 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:12:21.188275 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:12:21.188294 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:12:21.188310 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:12:21.188327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:12:21.188344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:12:21.188392 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:12:21.188409 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:12:21.188429 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:12:21.188446 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:12:21.188466 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:12:21.188486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:12:21.188503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:12:21.188520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:12:21.188536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:12:21.193585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:12:21.193678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:12:21.193700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:12:21.193729 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:12:21.193845 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:12:21.193866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:12:21.193885 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:12:21.193929 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:12:21.193953 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:12:21.193974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:12:21.194017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:12:21.194036 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:12:21.194055 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:12:21.194073 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:12:21.194254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:12:21.194335 systemd-journald[178]: Collecting audit messages is disabled.
Jan 17 12:12:21.194432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:12:21.194457 systemd-journald[178]: Journal started
Jan 17 12:12:21.194568 systemd-journald[178]: Runtime Journal (/run/log/journal/ec251fcde8d9b3bb0cf2cd90d112a223) is 4.8M, max 38.6M, 33.7M free.
Jan 17 12:12:21.137441 systemd-modules-load[179]: Inserted module 'overlay'
Jan 17 12:12:21.349084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:12:21.349146 kernel: Bridge firewalling registered
Jan 17 12:12:21.204120 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 17 12:12:21.352521 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:12:21.352899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:12:21.355129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:12:21.369000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:12:21.379846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:12:21.381713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:12:21.405935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:12:21.412845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:12:21.446930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:12:21.452090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:12:21.455091 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:12:21.461811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:12:21.470766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:12:21.496523 dracut-cmdline[211]: dracut-dracut-053
Jan 17 12:12:21.500976 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:12:21.559377 systemd-resolved[213]: Positive Trust Anchors:
Jan 17 12:12:21.559402 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:12:21.559468 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:12:21.577595 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 17 12:12:21.581120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:12:21.584365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:12:21.645818 kernel: SCSI subsystem initialized
Jan 17 12:12:21.673609 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:12:21.703608 kernel: iscsi: registered transport (tcp)
Jan 17 12:12:21.740760 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:12:21.740844 kernel: QLogic iSCSI HBA Driver
Jan 17 12:12:21.790923 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:12:21.802843 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:12:21.838578 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:12:21.838717 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:12:21.839862 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:12:21.883586 kernel: raid6: avx512x4 gen() 16952 MB/s
Jan 17 12:12:21.902701 kernel: raid6: avx512x2 gen() 10817 MB/s
Jan 17 12:12:21.919583 kernel: raid6: avx512x1 gen()  8002 MB/s
Jan 17 12:12:21.936601 kernel: raid6: avx2x4   gen() 14560 MB/s
Jan 17 12:12:21.953592 kernel: raid6: avx2x2   gen() 16498 MB/s
Jan 17 12:12:21.970581 kernel: raid6: avx2x1   gen() 12799 MB/s
Jan 17 12:12:21.970684 kernel: raid6: using algorithm avx512x4 gen() 16952 MB/s
Jan 17 12:12:21.987599 kernel: raid6: .... xor() 7390 MB/s, rmw enabled
Jan 17 12:12:21.987685 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 12:12:22.024600 kernel: xor: automatically using best checksumming function   avx       
Jan 17 12:12:22.233587 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:12:22.244693 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:12:22.252890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:12:22.275092 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 17 12:12:22.289083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:12:22.330326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:12:22.358378 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jan 17 12:12:22.417187 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:12:22.428367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:12:22.499896 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:12:22.512778 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:12:22.564938 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:12:22.567440 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:12:22.570687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:12:22.577729 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:12:22.586721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:12:22.620070 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:12:22.674580 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:12:22.682802 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:12:22.711143 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:12:22.711353 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 12:12:22.711514 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8b:3f:5c:a0:47
Jan 17 12:12:22.721469 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:12:22.725035 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:12:22.725099 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:12:22.725440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:12:22.731206 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:12:22.731439 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:12:22.725632 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:12:22.729396 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:12:22.745052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:12:22.745289 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:12:22.756013 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:12:22.746957 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:12:22.761940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:12:22.769294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:12:22.769366 kernel: GPT:9289727 != 16777215
Jan 17 12:12:22.769384 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:12:22.770610 kernel: GPT:9289727 != 16777215
Jan 17 12:12:22.770662 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:12:22.770682 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:12:22.937579 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Jan 17 12:12:22.939602 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 17 12:12:23.028200 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:12:23.068928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:12:23.084456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:12:23.084717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:12:23.099548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:12:23.106685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:12:23.116808 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:12:23.135851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:12:23.145913 disk-uuid[616]: Primary Header is updated.
Jan 17 12:12:23.145913 disk-uuid[616]: Secondary Entries is updated.
Jan 17 12:12:23.145913 disk-uuid[616]: Secondary Header is updated.
Jan 17 12:12:23.154600 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:12:23.162514 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:12:23.167982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:12:23.173622 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:12:24.177633 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:12:24.179020 disk-uuid[617]: The operation has completed successfully.
Jan 17 12:12:24.383921 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:12:24.384045 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:12:24.420516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:12:24.437800 sh[968]: Success
Jan 17 12:12:24.463576 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:12:24.604807 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:12:24.622741 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:12:24.650594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:12:24.687227 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:12:24.687293 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:12:24.687313 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:12:24.687332 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:12:24.688810 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:12:24.815884 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:12:24.818697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:12:24.819819 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:12:24.833809 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:12:24.837777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:12:24.881689 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:12:24.881761 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:12:24.881783 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:12:24.893916 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:12:24.922831 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:12:24.924729 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:12:24.931659 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:12:24.939972 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:12:25.024261 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:12:25.033964 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:12:25.068332 systemd-networkd[1160]: lo: Link UP
Jan 17 12:12:25.068344 systemd-networkd[1160]: lo: Gained carrier
Jan 17 12:12:25.071112 systemd-networkd[1160]: Enumeration completed
Jan 17 12:12:25.071802 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:12:25.071807 systemd-networkd[1160]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:12:25.074465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:12:25.081810 systemd[1]: Reached target network.target - Network.
Jan 17 12:12:25.091375 systemd-networkd[1160]: eth0: Link UP
Jan 17 12:12:25.091384 systemd-networkd[1160]: eth0: Gained carrier
Jan 17 12:12:25.091441 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:12:25.110422 systemd-networkd[1160]: eth0: DHCPv4 address 172.31.29.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:12:25.395485 ignition[1081]: Ignition 2.19.0
Jan 17 12:12:25.395501 ignition[1081]: Stage: fetch-offline
Jan 17 12:12:25.397495 ignition[1081]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:25.397524 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:25.398876 ignition[1081]: Ignition finished successfully
Jan 17 12:12:25.403322 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:12:25.416046 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:12:25.440295 ignition[1168]: Ignition 2.19.0
Jan 17 12:12:25.440313 ignition[1168]: Stage: fetch
Jan 17 12:12:25.441072 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:25.441085 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:25.441342 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:25.496252 ignition[1168]: PUT result: OK
Jan 17 12:12:25.505214 ignition[1168]: parsed url from cmdline: ""
Jan 17 12:12:25.505225 ignition[1168]: no config URL provided
Jan 17 12:12:25.505235 ignition[1168]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:12:25.505252 ignition[1168]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:12:25.505278 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:25.507643 ignition[1168]: PUT result: OK
Jan 17 12:12:25.507707 ignition[1168]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:12:25.509446 ignition[1168]: GET result: OK
Jan 17 12:12:25.511017 ignition[1168]: parsing config with SHA512: e6455041544de246fb7a0f64df7dd69c527ef750d7fbeb4853e52a4cc36ee19c5105887148d1830de8c56d3ce22caf0d059632ed561c4c7a44b32f57a62ddb57
Jan 17 12:12:25.521117 unknown[1168]: fetched base config from "system"
Jan 17 12:12:25.521134 unknown[1168]: fetched base config from "system"
Jan 17 12:12:25.521149 unknown[1168]: fetched user config from "aws"
Jan 17 12:12:25.523543 ignition[1168]: fetch: fetch complete
Jan 17 12:12:25.523566 ignition[1168]: fetch: fetch passed
Jan 17 12:12:25.523643 ignition[1168]: Ignition finished successfully
Jan 17 12:12:25.528464 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:12:25.542831 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:12:25.573003 ignition[1175]: Ignition 2.19.0
Jan 17 12:12:25.573018 ignition[1175]: Stage: kargs
Jan 17 12:12:25.573876 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:25.573891 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:25.574049 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:25.576229 ignition[1175]: PUT result: OK
Jan 17 12:12:25.584692 ignition[1175]: kargs: kargs passed
Jan 17 12:12:25.584755 ignition[1175]: Ignition finished successfully
Jan 17 12:12:25.593790 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:12:25.602426 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:12:25.668578 ignition[1181]: Ignition 2.19.0
Jan 17 12:12:25.668593 ignition[1181]: Stage: disks
Jan 17 12:12:25.669247 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:25.669260 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:25.669367 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:25.670647 ignition[1181]: PUT result: OK
Jan 17 12:12:25.677881 ignition[1181]: disks: disks passed
Jan 17 12:12:25.677974 ignition[1181]: Ignition finished successfully
Jan 17 12:12:25.680418 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:12:25.680871 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:12:25.684015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:12:25.686699 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:12:25.689301 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:12:25.691681 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:12:25.699110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:12:25.742866 systemd-fsck[1189]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:12:25.749913 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:12:25.774529 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:12:25.976769 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:12:25.977708 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:12:25.981000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:12:25.996705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:12:26.001732 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:12:26.004032 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:12:26.004234 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:12:26.004273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:12:26.037778 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:12:26.047755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:12:26.052657 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1208)
Jan 17 12:12:26.055255 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:12:26.055311 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:12:26.055324 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:12:26.067600 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:12:26.069253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:12:26.363892 initrd-setup-root[1232]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:12:26.389093 initrd-setup-root[1239]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:12:26.399233 initrd-setup-root[1246]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:12:26.405151 initrd-setup-root[1253]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:12:26.705251 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:12:26.712779 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:12:26.725268 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:12:26.736289 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:12:26.737394 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:12:26.783130 ignition[1320]: INFO     : Ignition 2.19.0
Jan 17 12:12:26.783130 ignition[1320]: INFO     : Stage: mount
Jan 17 12:12:26.804526 ignition[1320]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:26.804526 ignition[1320]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:26.809622 ignition[1320]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:26.811833 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:12:26.815143 ignition[1320]: INFO     : PUT result: OK
Jan 17 12:12:26.820849 ignition[1320]: INFO     : mount: mount passed
Jan 17 12:12:26.825790 ignition[1320]: INFO     : Ignition finished successfully
Jan 17 12:12:26.823407 systemd[1]: Finished ignition-mount.service - Ignition (mount).
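The "PUT http://169.254.169.254/latest/api/token" line in each Ignition stage is the IMDSv2 handshake: a session token is obtained with a PUT and then attached to subsequent metadata requests. A minimal sketch of the same exchange (the endpoint and header names are the documented EC2 IMDSv2 ones; the TTL value and the instance-id path are illustrative choices, not taken from this log):

    import urllib.request

    # Step 1: PUT to the token endpoint, requesting a session token.
    req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET metadata with the token attached.
    md = urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(md, timeout=2).read().decode())

The "PUT result: OK" lines above correspond to step 1 succeeding before each stage fetches its platform config.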
Jan 17 12:12:26.832778 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:12:26.836651 systemd-networkd[1160]: eth0: Gained IPv6LL
Jan 17 12:12:26.984863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:12:27.032429 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1333)
Jan 17 12:12:27.032509 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:12:27.032533 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:12:27.033429 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:12:27.040626 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:12:27.043656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:12:27.112528 ignition[1350]: INFO     : Ignition 2.19.0
Jan 17 12:12:27.112528 ignition[1350]: INFO     : Stage: files
Jan 17 12:12:27.114838 ignition[1350]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:27.114838 ignition[1350]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:27.114838 ignition[1350]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:27.114838 ignition[1350]: INFO     : PUT result: OK
Jan 17 12:12:27.123443 ignition[1350]: DEBUG    : files: compiled without relabeling support, skipping
Jan 17 12:12:27.125844 ignition[1350]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 17 12:12:27.125844 ignition[1350]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:12:27.155836 ignition[1350]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:12:27.157781 ignition[1350]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 17 12:12:27.160934 ignition[1350]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:12:27.157837 unknown[1350]: wrote ssh authorized keys file for user: core
Jan 17 12:12:27.172934 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:12:27.177302 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:12:27.177302 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:12:27.177302 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:12:27.323503 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:12:27.497711 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:12:27.499985 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:12:27.499985 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 12:12:27.995784 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 17 12:12:28.118096 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:12:28.120764 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/install.sh"
Jan 17 12:12:28.123187 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:12:28.125646 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:12:28.129202 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:12:28.142137 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:12:28.144432 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:12:28.144432 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:12:28.149504 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:12:28.157777 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 17 12:12:28.424573 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 17 12:12:28.924574 ignition[1350]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:12:28.929346 ignition[1350]: INFO     : files: op(d): [started]  processing unit "containerd.service"
Jan 17 12:12:28.934648 ignition[1350]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:12:28.946823 ignition[1350]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:12:28.946823 ignition[1350]: INFO     : files: op(d): [finished] processing unit "containerd.service"
Jan 17 12:12:28.946823 ignition[1350]: INFO     : files: op(f): [started]  processing unit "prepare-helm.service"
Jan 17 12:12:28.968085 ignition[1350]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:12:28.968085 ignition[1350]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:12:28.968085 ignition[1350]: INFO     : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 17 12:12:28.968085 ignition[1350]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Jan 17 12:12:28.968085 ignition[1350]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:12:28.995200 ignition[1350]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:12:28.995200 ignition[1350]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:12:28.995200 ignition[1350]: INFO     : files: files passed
Jan 17 12:12:28.995200 ignition[1350]: INFO     : Ignition finished successfully
Jan 17 12:12:28.985773 systemd[1]: Finished ignition-files.service - Ignition (files).
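All of the files-stage operations above (file and link writes, unit drop-ins, presets) are declared in one Ignition config. A trimmed sketch of a config that would drive ops like op(4), op(b) and op(e), built as the JSON document Ignition consumes; the field names follow the Ignition v3 spec, the URLs and paths echo the log, and the unit bodies are placeholders rather than the real Flatcar contents:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [
                # Drop-in for an existing unit, as in op(e); body elided here.
                {"name": "containerd.service",
                 "dropins": [{"name": "10-use-cgroupfs.conf",
                              "contents": "[Service]\n"}]},
                # Full unit plus enablement, as in op(10)/op(11); body elided here.
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n"},
            ],
        },
    }
    print(json.dumps(config, indent=2))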
Jan 17 12:12:29.025862 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:12:29.044898 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:12:29.076782 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:12:29.076927 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:12:29.105408 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:12:29.107404 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:12:29.109826 initrd-setup-root-after-ignition[1379]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:12:29.109432 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:12:29.112204 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:12:29.133800 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:12:29.202287 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:12:29.202426 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:12:29.205364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:12:29.208177 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:12:29.211743 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:12:29.222695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:12:29.299712 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:12:29.311393 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:12:29.330522 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:12:29.330773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:12:29.337539 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:12:29.339517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:12:29.340738 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:12:29.343739 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:12:29.347370 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:12:29.349886 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:12:29.351879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:12:29.358600 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:12:29.361628 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:12:29.364889 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:12:29.366200 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:12:29.370849 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:12:29.378380 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:12:29.380705 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:12:29.380927 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:12:29.385226 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:12:29.389328 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:12:29.395719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:12:29.395854 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:12:29.402939 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:12:29.403123 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:12:29.409012 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:12:29.409254 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:12:29.414779 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:12:29.417466 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:12:29.427944 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:12:29.431026 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:12:29.431300 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:12:29.438100 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:12:29.449911 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:12:29.452813 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:12:29.457738 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:12:29.457910 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:12:29.475661 ignition[1403]: INFO     : Ignition 2.19.0
Jan 17 12:12:29.475661 ignition[1403]: INFO     : Stage: umount
Jan 17 12:12:29.475472 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:12:29.483302 ignition[1403]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:12:29.483302 ignition[1403]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:12:29.483302 ignition[1403]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:12:29.475692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:12:29.490309 ignition[1403]: INFO     : PUT result: OK
Jan 17 12:12:29.495077 ignition[1403]: INFO     : umount: umount passed
Jan 17 12:12:29.495077 ignition[1403]: INFO     : Ignition finished successfully
Jan 17 12:12:29.499421 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:12:29.500854 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:12:29.504654 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:12:29.504740 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:12:29.506573 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:12:29.506672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:12:29.508977 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:12:29.509044 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:12:29.513461 systemd[1]: Stopped target network.target - Network.
Jan 17 12:12:29.518135 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:12:29.518235 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:12:29.520513 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:12:29.522784 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:12:29.531285 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:12:29.537288 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:12:29.544759 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:12:29.547299 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:12:29.547354 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:12:29.552601 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:12:29.552676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:12:29.554991 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:12:29.555066 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:12:29.557355 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:12:29.557532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:12:29.559040 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:12:29.561528 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:12:29.569607 systemd-networkd[1160]: eth0: DHCPv6 lease lost
Jan 17 12:12:29.571386 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:12:29.573282 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:12:29.573391 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:12:29.596964 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:12:29.597187 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:12:29.600719 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:12:29.600865 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:12:29.605294 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:12:29.605405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:12:29.607303 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:12:29.607383 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:12:29.617790 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:12:29.618944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:12:29.619032 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:12:29.621465 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:12:29.621524 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:12:29.628701 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:12:29.628773 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:12:29.631200 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:12:29.631273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:12:29.634820 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:12:29.657026 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:12:29.658699 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:12:29.663304 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:12:29.663405 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:12:29.664444 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:12:29.664513 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:12:29.667980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:12:29.668059 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:12:29.678105 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:12:29.678180 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:12:29.680995 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:12:29.681078 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:12:29.685355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:12:29.685496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:12:29.699835 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:12:29.701652 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:12:29.701804 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:12:29.705341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:12:29.705404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:12:29.734071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:12:29.734193 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:12:29.738138 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:12:29.746818 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:12:29.796275 systemd[1]: Switching root.
Jan 17 12:12:29.831949 systemd-journald[178]: Journal stopped
Jan 17 12:12:32.689772 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:12:32.689928 kernel: SELinux:  policy capability network_peer_controls=1
Jan 17 12:12:32.689951 kernel: SELinux:  policy capability open_perms=1
Jan 17 12:12:32.689969 kernel: SELinux:  policy capability extended_socket_class=1
Jan 17 12:12:32.689995 kernel: SELinux:  policy capability always_check_network=0
Jan 17 12:12:32.690015 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 17 12:12:32.690039 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 17 12:12:32.690056 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 17 12:12:32.690073 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 17 12:12:32.690098 kernel: audit: type=1403 audit(1737115950.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:12:32.690119 systemd[1]: Successfully loaded SELinux policy in 64.007ms.
Jan 17 12:12:32.690146 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.640ms.
Jan 17 12:12:32.690173 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
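The +/- tokens in the banner above record systemd's compile-time features; the "-BPF_FRAMEWORK" entry is why the journal later warns that the IP firewall configured by systemd-journald.service is unsupported on this build. A throwaway parse of that banner into enabled and disabled sets:

    flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
             "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
             "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
             "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT").split()
    enabled  = sorted(f[1:] for f in flags if f[0] == "+")
    disabled = sorted(f[1:] for f in flags if f[0] == "-")
    print("disabled:", ", ".join(disabled))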
Jan 17 12:12:32.690193 systemd[1]: Detected virtualization amazon.
Jan 17 12:12:32.690212 systemd[1]: Detected architecture x86-64.
Jan 17 12:12:32.690230 systemd[1]: Detected first boot.
Jan 17 12:12:32.690249 systemd[1]: Initializing machine ID from VM UUID.
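"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID on this first boot. A sketch of where that value lives (the sysfs path is standard; reading it usually requires root, and the dash-stripped lower-case rendering is an assumption about the derivation, not a quote from the systemd docs):

    from pathlib import Path

    # DMI product UUID exposed by the hypervisor (KVM here, per the kernel banner).
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print(uuid)
    # machine-id style rendering: 32 hex digits, no dashes.
    print(uuid.replace("-", "").lower())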
Jan 17 12:12:32.690268 zram_generator::config[1463]: No configuration found.
Jan 17 12:12:32.690291 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:12:32.690309 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:12:32.690329 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 17 12:12:32.690353 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:12:32.690372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:12:32.690391 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:12:32.690409 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:12:32.690429 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:12:32.690451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:12:32.690470 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:12:32.690488 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:12:32.693303 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:12:32.693355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:12:32.694111 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:12:32.694146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:12:32.694167 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:12:32.694195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:12:32.694215 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:12:32.694234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:12:32.694253 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:12:32.694462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:12:32.694491 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:12:32.694641 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:12:32.694667 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:12:32.694686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:12:32.694710 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:12:32.694728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:12:32.698734 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:12:32.698764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:12:32.698785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:12:32.698803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:12:32.698822 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:12:32.698841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:12:32.698860 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:12:32.698894 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:12:32.698913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:32.698932 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:12:32.698951 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:12:32.698970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:12:32.698989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:12:32.699008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:12:32.699026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:12:32.699055 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:12:32.699073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:12:32.699098 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:12:32.699117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:12:32.699135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:12:32.699152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:12:32.699172 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:12:32.699190 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 12:12:32.699388 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 12:12:32.699411 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:12:32.699430 kernel: fuse: init (API version 7.39)
Jan 17 12:12:32.699450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:12:32.699474 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:12:32.699494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:12:32.699514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:12:32.699533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:32.699728 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:12:32.699752 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:12:32.699770 kernel: loop: module loaded
Jan 17 12:12:32.699788 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:12:32.699807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:12:32.699825 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:12:32.699843 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:12:32.702819 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:12:32.702849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:12:32.702869 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:12:32.702895 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:12:32.702914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:12:32.702933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:12:32.702951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:12:32.703019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:12:32.703044 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:12:32.703064 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:12:32.703120 systemd-journald[1567]: Collecting audit messages is disabled.
Jan 17 12:12:32.703199 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:12:32.703219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:12:32.703236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:12:32.703255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:12:32.703277 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:12:32.703296 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:12:32.703315 systemd-journald[1567]: Journal started
Jan 17 12:12:32.703350 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec251fcde8d9b3bb0cf2cd90d112a223) is 4.8M, max 38.6M, 33.7M free.
Jan 17 12:12:32.721646 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:12:32.742586 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:12:32.742687 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:12:32.779732 kernel: ACPI: bus type drm_connector registered
Jan 17 12:12:32.793582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:12:32.793685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:12:32.799622 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:12:32.803647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:12:32.818586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:12:32.838298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:12:32.838383 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:12:32.844531 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:12:32.853120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:12:32.859295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:12:32.861189 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:12:32.865067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:12:32.867105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:12:32.898922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:12:32.905955 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Jan 17 12:12:32.905975 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Jan 17 12:12:32.910335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:12:32.920053 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:12:32.927799 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:12:32.939181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:12:32.954111 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:12:32.956820 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec251fcde8d9b3bb0cf2cd90d112a223 is 45.603ms for 960 entries.
Jan 17 12:12:32.956820 systemd-journald[1567]: System Journal (/var/log/journal/ec251fcde8d9b3bb0cf2cd90d112a223) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:12:33.017234 systemd-journald[1567]: Received client request to flush runtime journal.
Jan 17 12:12:32.968413 udevadm[1624]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:12:33.020056 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
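For scale, the flush report above works out to roughly 47 microseconds per journal entry:

    ms, entries = 45.603, 960
    print(f"{ms / entries * 1000:.1f} us per entry")  # ~47.5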
Jan 17 12:12:33.044344 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:12:33.056863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:12:33.093410 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Jan 17 12:12:33.094014 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Jan 17 12:12:33.105161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:12:34.024348 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:12:34.032862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:12:34.090474 systemd-udevd[1640]: Using default interface naming scheme 'v255'.
Jan 17 12:12:34.193518 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:12:34.219715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:12:34.284786 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:12:34.354950 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 17 12:12:34.358695 (udev-worker)[1643]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:12:34.394971 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:12:34.529692 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:12:34.540413 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 17 12:12:34.543631 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 17 12:12:34.552175 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:12:34.552279 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 17 12:12:34.559628 kernel: ACPI: button: Sleep Button [SLPF]
Jan 17 12:12:34.615108 systemd-networkd[1644]: lo: Link UP
Jan 17 12:12:34.615119 systemd-networkd[1644]: lo: Gained carrier
Jan 17 12:12:34.618449 systemd-networkd[1644]: Enumeration completed
Jan 17 12:12:34.618694 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:12:34.619598 systemd-networkd[1644]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:12:34.619605 systemd-networkd[1644]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:12:34.629907 systemd-networkd[1644]: eth0: Link UP
Jan 17 12:12:34.637217 systemd-networkd[1644]: eth0: Gained carrier
Jan 17 12:12:34.637374 systemd-networkd[1644]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:12:34.651627 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:12:34.665088 systemd-networkd[1644]: eth0: DHCPv4 address 172.31.29.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
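The DHCPv4 lease above places eth0 in a /20. A quick check with Python's ipaddress module confirms the gateway 172.31.16.1 sits inside the derived 172.31.16.0/20 network:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.55/20")
    net = iface.network
    print(net)                                         # 172.31.16.0/20
    print(net.network_address, net.broadcast_address)  # 172.31.16.0 172.31.31.255
    print(ipaddress.ip_address("172.31.16.1") in net)  # True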
Jan 17 12:12:34.716096 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1650)
Jan 17 12:12:34.777601 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:12:34.836090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:12:34.953718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:12:34.954357 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:12:34.990108 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:12:35.028583 lvm[1761]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:12:35.061129 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:12:35.189124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:12:35.205102 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:12:35.207667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:12:35.228788 lvm[1766]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:12:35.275641 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:12:35.280388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:12:35.282191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:12:35.282228 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:12:35.283827 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:12:35.286493 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:12:35.295955 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:12:35.310366 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:12:35.313715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:12:35.332151 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:12:35.338913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:12:35.344964 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:12:35.348673 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:12:35.383981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:12:35.417726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:12:35.423106 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 12:12:35.423103 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:12:35.568261 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:12:35.606604 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 12:12:35.761587 kernel: loop2: detected capacity change from 0 to 61336
Jan 17 12:12:35.902580 kernel: loop3: detected capacity change from 0 to 211296
Jan 17 12:12:36.041636 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 12:12:36.094608 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 12:12:36.122594 kernel: loop6: detected capacity change from 0 to 61336
Jan 17 12:12:36.135583 kernel: loop7: detected capacity change from 0 to 211296
Jan 17 12:12:36.180158 (sd-merge)[1788]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 17 12:12:36.184362 (sd-merge)[1788]: Merged extensions into '/usr'.
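The "(sd-merge)" lines show systemd-sysext stacking the four extension images into an overlay over /usr and /opt; the 'kubernetes' image is reached through the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. A sketch of the discovery step, scanning directories that are among those systemd-sysext searches for *.raw images (the output format is illustrative):

    from pathlib import Path

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if not base.is_dir():
            continue
        for image in sorted(base.glob("*.raw")):
            print(image.name, "->", image.resolve())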
Jan 17 12:12:36.192335 systemd[1]: Reloading requested from client PID 1775 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:12:36.192702 systemd[1]: Reloading...
Jan 17 12:12:36.279597 zram_generator::config[1813]: No configuration found.
Jan 17 12:12:36.540314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:12:36.562925 systemd-networkd[1644]: eth0: Gained IPv6LL
Jan 17 12:12:36.660759 systemd[1]: Reloading finished in 466 ms.
Jan 17 12:12:36.677380 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:12:36.679931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:12:36.696391 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:12:36.706605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:12:36.734595 systemd[1]: Reloading requested from client PID 1872 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:12:36.734617 systemd[1]: Reloading...
Jan 17 12:12:36.748001 systemd-tmpfiles[1873]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:12:36.749379 systemd-tmpfiles[1873]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:12:36.751008 systemd-tmpfiles[1873]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:12:36.751705 systemd-tmpfiles[1873]: ACLs are not supported, ignoring.
Jan 17 12:12:36.752043 systemd-tmpfiles[1873]: ACLs are not supported, ignoring.
Jan 17 12:12:36.758704 systemd-tmpfiles[1873]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:12:36.758895 systemd-tmpfiles[1873]: Skipping /boot
Jan 17 12:12:36.778490 systemd-tmpfiles[1873]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:12:36.779524 systemd-tmpfiles[1873]: Skipping /boot
Jan 17 12:12:36.927707 zram_generator::config[1904]: No configuration found.
Jan 17 12:12:37.108773 ldconfig[1771]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:12:37.127297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:12:37.205369 systemd[1]: Reloading finished in 469 ms.
Jan 17 12:12:37.229144 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:12:37.241346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:12:37.264030 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:12:37.279857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:12:37.288294 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:12:37.301224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:12:37.337832 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:12:37.384509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.387774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:12:37.391884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:12:37.409051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:12:37.429707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:12:37.431811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:12:37.432024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.449390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.451894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:12:37.452747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:12:37.453258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.457158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:12:37.457471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:12:37.465986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:12:37.466423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:12:37.476256 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:12:37.476997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:12:37.505125 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:12:37.516840 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:12:37.542805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.543738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:12:37.553577 augenrules[1996]: No rules
Jan 17 12:12:37.556039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:12:37.588392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:12:37.596717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:12:37.616504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:12:37.618412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:12:37.620730 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:12:37.634061 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:12:37.635861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:12:37.641272 systemd-resolved[1968]: Positive Trust Anchors:
Jan 17 12:12:37.641295 systemd-resolved[1968]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:12:37.642193 systemd-resolved[1968]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:12:37.648437 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:12:37.655117 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:12:37.661988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:12:37.666436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:12:37.674728 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:12:37.674999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:12:37.680247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:12:37.680478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:12:37.694135 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:12:37.697841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:12:37.704127 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:12:37.706450 systemd-resolved[1968]: Defaulting to hostname 'linux'.
Jan 17 12:12:37.713344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:12:37.715718 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:12:37.728502 systemd[1]: Reached target network.target - Network.
Jan 17 12:12:37.730465 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:12:37.732186 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:12:37.733856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:12:37.733933 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:12:37.733956 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:12:37.733981 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:12:37.735667 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:12:37.737367 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:12:37.739649 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:12:37.741782 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.
Jan 17 12:12:37.743776 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:12:37.745626 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:12:37.745674 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:12:37.746861 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:12:37.749415 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:12:37.753602 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:12:37.756898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:12:37.764108 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:12:37.766761 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:12:37.768114 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:12:37.769826 systemd[1]: System is tainted: cgroupsv1
Jan 17 12:12:37.769997 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:12:37.770048 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:12:37.772822 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:12:37.782795 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:12:37.788912 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:12:37.803734 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:12:37.809035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:12:37.810940 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:12:37.814374 jq[2030]: false
Jan 17 12:12:37.834908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:37.895959 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:12:37.963824 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 12:12:37.975198 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found loop4
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found loop5
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found loop6
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found loop7
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found nvme0n1
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found nvme0n1p2
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found nvme0n1p3
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found usr
Jan 17 12:12:37.990471 extend-filesystems[2031]: Found nvme0n1p4
Jan 17 12:12:38.071687 extend-filesystems[2031]: Found nvme0n1p6
Jan 17 12:12:38.071687 extend-filesystems[2031]: Found nvme0n1p7
Jan 17 12:12:38.071687 extend-filesystems[2031]: Found nvme0n1p9
Jan 17 12:12:38.071687 extend-filesystems[2031]: Checking size of /dev/nvme0n1p9
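The "Checking size of /dev/nvme0n1p9" step compares the root filesystem against its partition to decide whether a grow is needed, since EC2 volumes are often larger than the shipped image. An illustrative version of that comparison, not Flatcar's actual implementation; it assumes dumpe2fs is available and that the script runs as root:

    import subprocess
    from pathlib import Path

    dev = "nvme0n1p9"
    # Partition size: /sys/class/block/<dev>/size counts 512-byte sectors.
    part_bytes = int(Path(f"/sys/class/block/{dev}/size").read_text()) * 512
    # Filesystem size: block count x block size from the ext4 superblock.
    header = subprocess.run(["dumpe2fs", "-h", f"/dev/{dev}"],
                            capture_output=True, text=True).stdout
    fields = dict(line.split(":", 1) for line in header.splitlines() if ":" in line)
    fs_bytes = int(fields["Block count"].strip()) * int(fields["Block size"].strip())
    print("grow needed" if part_bytes > fs_bytes else "filesystem already fills partition")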
Jan 17 12:12:37.992345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 12:12:38.028127 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 17 12:12:38.054834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:12:38.092435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:12:38.151716 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:12:38.155707 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:12:38.177540 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:12:38.180681 dbus-daemon[2028]: [system] SELinux support is enabled
Jan 17 12:12:38.182934 ntpd[2038]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting
Jan 17 12:12:38.182964 ntpd[2038]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 12:12:38.182976 ntpd[2038]: ----------------------------------------------------
Jan 17 12:12:38.182986 ntpd[2038]: ntp-4 is maintained by Network Time Foundation,
Jan 17 12:12:38.182998 ntpd[2038]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 12:12:38.183007 ntpd[2038]: corporation.  Support and training for ntp-4 are
Jan 17 12:12:38.183017 ntpd[2038]: available at https://www.nwtime.org/support
Jan 17 12:12:38.183027 ntpd[2038]: ----------------------------------------------------
Jan 17 12:12:38.192144 dbus-daemon[2028]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1644 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 12:12:38.195417 ntpd[2038]: proto: precision = 0.084 usec (-23)
Jan 17 12:12:38.197746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:12:38.197118 ntpd[2038]: basedate set to 2025-01-05
Jan 17 12:12:38.209255 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:12:38.197278 ntpd[2038]: gps base set to 2025-01-05 (week 2348)
Jan 17 12:12:38.204057 ntpd[2038]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 12:12:38.204123 ntpd[2038]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 12:12:38.204336 ntpd[2038]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 12:12:38.204380 ntpd[2038]: Listen normally on 3 eth0 172.31.29.55:123
Jan 17 12:12:38.204426 ntpd[2038]: Listen normally on 4 lo [::1]:123
Jan 17 12:12:38.204473 ntpd[2038]: Listen normally on 5 eth0 [fe80::48b:3fff:fe5c:a047%2]:123
Jan 17 12:12:38.204512 ntpd[2038]: Listening on routing socket on fd #22 for interface updates
Jan 17 12:12:38.207966 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 12:12:38.208007 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
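The ntpd block above shows the daemon binding its wildcard and per-interface sockets while the kernel clock is still unsynchronized (TIME_ERROR 0x41). For reference, a minimal SNTP query sketch in Python; the server name is illustrative and not taken from this host's ntp configuration:

    import socket, struct, time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server="pool.ntp.org", port=123, timeout=5.0):
        # LI=0, VN=3, Mode=3 (client): first byte 0x1B, rest zeroed (48 bytes total)
        packet = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        # Transmit timestamp: seconds + fraction at bytes 40..47, network byte order
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    print(time.ctime(sntp_time()))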
Jan 17 12:12:38.232331 extend-filesystems[2031]: Resized partition /dev/nvme0n1p9
Jan 17 12:12:38.238365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:12:38.239067 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:12:38.252980 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:12:38.265933 extend-filesystems[2068]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:12:38.259135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:12:38.283579 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 17 12:12:38.291592 jq[2064]: true
Jan 17 12:12:38.304943 coreos-metadata[2027]: Jan 17 12:12:38.295 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 12:12:38.304943 coreos-metadata[2027]: Jan 17 12:12:38.299 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 17 12:12:38.293953 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.307 INFO Fetch successful
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.308 INFO Fetch successful
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.310 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.311 INFO Fetch successful
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.311 INFO Fetch successful
Jan 17 12:12:38.318212 coreos-metadata[2027]: Jan 17 12:12:38.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 17 12:12:38.314133 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:12:38.331061 coreos-metadata[2027]: Jan 17 12:12:38.319 INFO Fetch failed with 404: resource not found
Jan 17 12:12:38.331061 coreos-metadata[2027]: Jan 17 12:12:38.326 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 17 12:12:38.331162 update_engine[2062]: I20250117 12:12:38.327147  2062 main.cc:92] Flatcar Update Engine starting
Jan 17 12:12:38.331162 update_engine[2062]: I20250117 12:12:38.329927  2062 update_check_scheduler.cc:74] Next update check in 6m28s
Jan 17 12:12:38.349215 coreos-metadata[2027]: Jan 17 12:12:38.335 INFO Fetch successful
Jan 17 12:12:38.349215 coreos-metadata[2027]: Jan 17 12:12:38.335 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 17 12:12:38.349215 coreos-metadata[2027]: Jan 17 12:12:38.347 INFO Fetch successful
Jan 17 12:12:38.349215 coreos-metadata[2027]: Jan 17 12:12:38.347 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 17 12:12:38.350724 coreos-metadata[2027]: Jan 17 12:12:38.350 INFO Fetch successful
Jan 17 12:12:38.350795 coreos-metadata[2027]: Jan 17 12:12:38.350 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 17 12:12:38.358167 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:12:38.360718 coreos-metadata[2027]: Jan 17 12:12:38.358 INFO Fetch successful
Jan 17 12:12:38.360718 coreos-metadata[2027]: Jan 17 12:12:38.358 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 17 12:12:38.363924 coreos-metadata[2027]: Jan 17 12:12:38.363 INFO Fetch successful
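The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to the token endpoint, then GETs against the dated meta-data paths with the token attached; the 404 on the ipv6 path is the expected reply when the instance has no IPv6 address assigned. A minimal sketch of the same flow (urllib-based, illustrative):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path, ttl=21600):
        # Step 1: the PUT logged as "Putting http://169.254.169.254/latest/api/token"
        req = urllib.request.Request(IMDS + "/latest/api/token", method="PUT",
                                     headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        token = urllib.request.urlopen(req, timeout=2).read().decode()
        # Step 2: fetch a dated meta-data path with the session token attached
        req = urllib.request.Request(IMDS + "/2021-01-03/meta-data/" + path,
                                     headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    print(imds_get("instance-id"))  # raises HTTPError 404 for absent paths such as ipv6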
Jan 17 12:12:38.386151 (ntainerd)[2081]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:12:38.427782 jq[2080]: true
Jan 17 12:12:38.460589 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 17 12:12:38.479304 dbus-daemon[2028]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 17 12:12:38.481753 extend-filesystems[2068]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 17 12:12:38.481753 extend-filesystems[2068]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 17 12:12:38.481753 extend-filesystems[2068]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
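A quick sanity check of the resize2fs figures above, converting the 4 KiB block counts to sizes:

    # 553472 -> 1489915 blocks of 4 KiB, per the kernel and resize2fs lines above
    old_blocks, new_blocks, block_size = 553472, 1489915, 4096
    print(f"{old_blocks * block_size / 2**30:.2f} GiB -> "
          f"{new_blocks * block_size / 2**30:.2f} GiB")  # 2.11 GiB -> 5.68 GiB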
Jan 17 12:12:38.484299 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:12:38.500338 tar[2073]: linux-amd64/helm
Jan 17 12:12:38.529846 extend-filesystems[2031]: Resized filesystem in /dev/nvme0n1p9
Jan 17 12:12:38.529846 extend-filesystems[2031]: Found nvme0n1p1
Jan 17 12:12:38.512847 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:12:38.513157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:12:38.522490 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 17 12:12:38.550190 systemd-logind[2055]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 17 12:12:38.550218 systemd-logind[2055]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 17 12:12:38.550242 systemd-logind[2055]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:12:38.551132 systemd-logind[2055]: New seat seat0.
Jan 17 12:12:38.566680 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:12:38.582346 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 17 12:12:38.585813 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:12:38.586348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:12:38.601663 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 12:12:38.604672 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:12:38.604902 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:12:38.608612 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:12:38.613850 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:12:38.631590 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:12:38.643667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:12:38.791126 bash[2156]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:12:38.793827 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:12:38.819943 systemd[1]: Starting sshkeys.service...
Jan 17 12:12:38.849588 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2124)
Jan 17 12:12:38.936863 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:12:38.953027 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:12:39.107641 amazon-ssm-agent[2135]: Initializing new seelog logger
Jan 17 12:12:39.112823 amazon-ssm-agent[2135]: New Seelog Logger Creation Complete
Jan 17 12:12:39.115269 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 processing appconfig overrides
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 processing appconfig overrides
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.117590 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 processing appconfig overrides
Jan 17 12:12:39.132763 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO Proxy environment variables:
Jan 17 12:12:39.146975 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.146975 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:12:39.146975 amazon-ssm-agent[2135]: 2025/01/17 12:12:39 processing appconfig overrides
Jan 17 12:12:39.241973 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO https_proxy:
Jan 17 12:12:39.357447 sshd_keygen[2077]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:12:39.357894 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO http_proxy:
Jan 17 12:12:39.369027 coreos-metadata[2179]: Jan 17 12:12:39.366 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 12:12:39.389204 coreos-metadata[2179]: Jan 17 12:12:39.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 17 12:12:39.392773 coreos-metadata[2179]: Jan 17 12:12:39.392 INFO Fetch successful
Jan 17 12:12:39.398075 coreos-metadata[2179]: Jan 17 12:12:39.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 12:12:39.398407 locksmithd[2140]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:12:39.431839 coreos-metadata[2179]: Jan 17 12:12:39.404 INFO Fetch successful
Jan 17 12:12:39.431689 unknown[2179]: wrote ssh authorized keys file for user: core
Jan 17 12:12:39.474587 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO no_proxy:
Jan 17 12:12:39.539826 dbus-daemon[2028]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 12:12:39.543289 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 12:12:39.547601 dbus-daemon[2028]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2138 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 12:12:39.563941 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 12:12:39.569337 update-ssh-keys[2235]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:12:39.572650 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:12:39.580101 systemd[1]: Finished sshkeys.service.
Jan 17 12:12:39.588464 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO Checking if agent identity type OnPrem can be assumed
Jan 17 12:12:39.621171 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 12:12:39.635051 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 12:12:39.647997 polkitd[2246]: Started polkitd version 121
Jan 17 12:12:39.674254 polkitd[2246]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 12:12:39.697164 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO Checking if agent identity type EC2 can be assumed
Jan 17 12:12:39.689032 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 12:12:39.689360 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 12:12:39.697622 polkitd[2246]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 12:12:39.712058 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 12:12:39.727616 polkitd[2246]: Finished loading, compiling and executing 2 rules
Jan 17 12:12:39.736311 dbus-daemon[2028]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 12:12:39.736486 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 12:12:39.746966 polkitd[2246]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 12:12:39.762979 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 12:12:39.784011 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 12:12:39.799060 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO Agent will take identity from EC2
Jan 17 12:12:39.803278 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 12:12:39.805258 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 12:12:39.901715 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:12:39.933953 systemd-hostnamed[2138]: Hostname set to <ip-172-31-29-55> (transient)
Jan 17 12:12:39.934466 systemd-resolved[1968]: System hostname changed to 'ip-172-31-29-55'.
Jan 17 12:12:40.002388 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:12:40.101618 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:12:40.108572 containerd[2081]: time="2025-01-17T12:12:40.108455593Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] Starting Core Agent
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [Registrar] Starting registrar module
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:40 INFO [EC2Identity] EC2 registration was successful.
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:40 INFO [CredentialRefresher] credentialRefresher has started
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:40 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 17 12:12:40.168704 amazon-ssm-agent[2135]: 2025-01-17 12:12:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 17 12:12:40.210212 amazon-ssm-agent[2135]: 2025-01-17 12:12:40 INFO [CredentialRefresher] Next credential rotation will be in 31.716660674233335 minutes
Jan 17 12:12:40.217212 containerd[2081]: time="2025-01-17T12:12:40.217119146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.221044 containerd[2081]: time="2025-01-17T12:12:40.220932663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:12:40.221858 containerd[2081]: time="2025-01-17T12:12:40.221647588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:12:40.221858 containerd[2081]: time="2025-01-17T12:12:40.221702512Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:12:40.221993 containerd[2081]: time="2025-01-17T12:12:40.221885997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:12:40.221993 containerd[2081]: time="2025-01-17T12:12:40.221908940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.221993 containerd[2081]: time="2025-01-17T12:12:40.221978750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:12:40.222094 containerd[2081]: time="2025-01-17T12:12:40.221998710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222304284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222331575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222352634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222367625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222606462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.222996830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.223227523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.223249534Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:12:40.223394 containerd[2081]: time="2025-01-17T12:12:40.223351060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:12:40.223784 containerd[2081]: time="2025-01-17T12:12:40.223403452Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:12:40.229687 containerd[2081]: time="2025-01-17T12:12:40.229613514Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:12:40.230262 containerd[2081]: time="2025-01-17T12:12:40.229836551Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:12:40.230262 containerd[2081]: time="2025-01-17T12:12:40.229866569Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:12:40.230262 containerd[2081]: time="2025-01-17T12:12:40.229944254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:12:40.230262 containerd[2081]: time="2025-01-17T12:12:40.229968784Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:12:40.230262 containerd[2081]: time="2025-01-17T12:12:40.230159611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:12:40.231025 containerd[2081]: time="2025-01-17T12:12:40.230999414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:12:40.231360 containerd[2081]: time="2025-01-17T12:12:40.231249992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:12:40.231460 containerd[2081]: time="2025-01-17T12:12:40.231439789Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:12:40.231533 containerd[2081]: time="2025-01-17T12:12:40.231518463Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:12:40.231628 containerd[2081]: time="2025-01-17T12:12:40.231612406Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.231719 containerd[2081]: time="2025-01-17T12:12:40.231705113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231775014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231799255Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231822452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231843951Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231862810Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231882366Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231912825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231935760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231955042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.231975116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.232028159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.232048819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.232066731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.233659 containerd[2081]: time="2025-01-17T12:12:40.232086000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232106179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232126923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232150792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232168916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232273660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234209 containerd[2081]: time="2025-01-17T12:12:40.232300304Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:12:40.234622 containerd[2081]: time="2025-01-17T12:12:40.232411046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.234970 containerd[2081]: time="2025-01-17T12:12:40.234941496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235045081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235117858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235145277Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235163400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235181494Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235197200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235216653Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235270063Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:12:40.235576 containerd[2081]: time="2025-01-17T12:12:40.235288916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:12:40.237039 containerd[2081]: time="2025-01-17T12:12:40.236945752Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 12:12:40.237329 containerd[2081]: time="2025-01-17T12:12:40.237306406Z" level=info msg="Connect containerd service"
Jan 17 12:12:40.238211 containerd[2081]: time="2025-01-17T12:12:40.237448972Z" level=info msg="using legacy CRI server"
Jan 17 12:12:40.238211 containerd[2081]: time="2025-01-17T12:12:40.237526311Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 12:12:40.238211 containerd[2081]: time="2025-01-17T12:12:40.237705531Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 12:12:40.239380 containerd[2081]: time="2025-01-17T12:12:40.239287652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:12:40.239823 containerd[2081]: time="2025-01-17T12:12:40.239783918Z" level=info msg="Start subscribing containerd event"
Jan 17 12:12:40.240275 containerd[2081]: time="2025-01-17T12:12:40.240248825Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 12:12:40.240362 containerd[2081]: time="2025-01-17T12:12:40.240329085Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 12:12:40.240438 containerd[2081]: time="2025-01-17T12:12:40.240265102Z" level=info msg="Start recovering state"
Jan 17 12:12:40.240848 containerd[2081]: time="2025-01-17T12:12:40.240810443Z" level=info msg="Start event monitor"
Jan 17 12:12:40.241019 containerd[2081]: time="2025-01-17T12:12:40.241002949Z" level=info msg="Start snapshots syncer"
Jan 17 12:12:40.241105 containerd[2081]: time="2025-01-17T12:12:40.241091774Z" level=info msg="Start cni network conf syncer for default"
Jan 17 12:12:40.241220 containerd[2081]: time="2025-01-17T12:12:40.241202693Z" level=info msg="Start streaming server"
Jan 17 12:12:40.241728 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 12:12:40.244590 containerd[2081]: time="2025-01-17T12:12:40.244317804Z" level=info msg="containerd successfully booted in 0.138157s"
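The earlier CRI error ("no network config found in /etc/cni/net.d") is expected at this point: containerd starts before any CNI plugin has written a network config, and its conf syncer picks one up later. A trivial check for that state (directory path from the log; the addon names are assumptions):

    import glob

    # Empty until a network addon (e.g. flannel or Calico; assumption) installs a config
    confs = glob.glob("/etc/cni/net.d/*.conf*")
    print(confs or "cni plugin not initialized: no network config found")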
Jan 17 12:12:40.520236 tar[2073]: linux-amd64/LICENSE
Jan 17 12:12:40.520820 tar[2073]: linux-amd64/README.md
Jan 17 12:12:40.548892 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 12:12:41.214655 amazon-ssm-agent[2135]: 2025-01-17 12:12:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 17 12:12:41.313445 amazon-ssm-agent[2135]: 2025-01-17 12:12:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2315) started
Jan 17 12:12:41.416174 amazon-ssm-agent[2135]: 2025-01-17 12:12:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 17 12:12:41.836901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:41.841349 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:12:41.841538 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 12:12:41.843427 systemd[1]: Startup finished in 10.791s (kernel) + 11.253s (userspace) = 22.044s.
Jan 17 12:12:43.490948 kubelet[2334]: E0117 12:12:43.490855    2334 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:12:43.497482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:12:43.500151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
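kubelet fails here because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during `kubeadm init`/`kubeadm join`, and until then the unit exits 1 and systemd schedules the restarts seen later (the roughly 10-second gaps are consistent with kubeadm's default Restart=always/RestartSec=10 drop-in). A sketch of the failing pre-flight check:

    import os, sys

    # Written by `kubeadm init` / `kubeadm join`; absent on a freshly booted node
    CONFIG = "/var/lib/kubelet/config.yaml"

    if not os.path.isfile(CONFIG):
        # Mirrors the run.go error above; exit 1 triggers the systemd restart loop
        sys.exit(f"failed to load Kubelet config file {CONFIG}: no such file or directory")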
Jan 17 12:12:46.395226 systemd-resolved[1968]: Clock change detected. Flushing caches.
Jan 17 12:12:46.887279 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 12:12:46.893342 systemd[1]: Started sshd@0-172.31.29.55:22-139.178.89.65:44300.service - OpenSSH per-connection server daemon (139.178.89.65:44300).
Jan 17 12:12:47.084603 sshd[2347]: Accepted publickey for core from 139.178.89.65 port 44300 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:47.087558 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:47.098713 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 12:12:47.109590 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 12:12:47.122278 systemd-logind[2055]: New session 1 of user core.
Jan 17 12:12:47.150102 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 12:12:47.159806 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 12:12:47.164810 (systemd)[2353]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 12:12:47.282026 systemd[2353]: Queued start job for default target default.target.
Jan 17 12:12:47.282525 systemd[2353]: Created slice app.slice - User Application Slice.
Jan 17 12:12:47.282557 systemd[2353]: Reached target paths.target - Paths.
Jan 17 12:12:47.282576 systemd[2353]: Reached target timers.target - Timers.
Jan 17 12:12:47.296115 systemd[2353]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 12:12:47.312155 systemd[2353]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 12:12:47.312234 systemd[2353]: Reached target sockets.target - Sockets.
Jan 17 12:12:47.312252 systemd[2353]: Reached target basic.target - Basic System.
Jan 17 12:12:47.312313 systemd[2353]: Reached target default.target - Main User Target.
Jan 17 12:12:47.312352 systemd[2353]: Startup finished in 140ms.
Jan 17 12:12:47.312939 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 12:12:47.319312 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 12:12:47.488943 systemd[1]: Started sshd@1-172.31.29.55:22-139.178.89.65:44310.service - OpenSSH per-connection server daemon (139.178.89.65:44310).
Jan 17 12:12:47.671532 sshd[2365]: Accepted publickey for core from 139.178.89.65 port 44310 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:47.675242 sshd[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:47.681544 systemd-logind[2055]: New session 2 of user core.
Jan 17 12:12:47.685332 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 12:12:47.816923 sshd[2365]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:47.823650 systemd[1]: sshd@1-172.31.29.55:22-139.178.89.65:44310.service: Deactivated successfully.
Jan 17 12:12:47.843056 systemd-logind[2055]: Session 2 logged out. Waiting for processes to exit.
Jan 17 12:12:47.849351 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 12:12:47.860959 systemd[1]: Started sshd@2-172.31.29.55:22-139.178.89.65:44312.service - OpenSSH per-connection server daemon (139.178.89.65:44312).
Jan 17 12:12:47.863400 systemd-logind[2055]: Removed session 2.
Jan 17 12:12:48.035234 sshd[2373]: Accepted publickey for core from 139.178.89.65 port 44312 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:48.038125 sshd[2373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:48.045043 systemd-logind[2055]: New session 3 of user core.
Jan 17 12:12:48.051333 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 12:12:48.171062 sshd[2373]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:48.178521 systemd[1]: sshd@2-172.31.29.55:22-139.178.89.65:44312.service: Deactivated successfully.
Jan 17 12:12:48.181738 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 12:12:48.183158 systemd-logind[2055]: Session 3 logged out. Waiting for processes to exit.
Jan 17 12:12:48.184851 systemd-logind[2055]: Removed session 3.
Jan 17 12:12:48.207550 systemd[1]: Started sshd@3-172.31.29.55:22-139.178.89.65:44320.service - OpenSSH per-connection server daemon (139.178.89.65:44320).
Jan 17 12:12:48.366789 sshd[2381]: Accepted publickey for core from 139.178.89.65 port 44320 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:48.368694 sshd[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:48.376129 systemd-logind[2055]: New session 4 of user core.
Jan 17 12:12:48.382311 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 12:12:48.513292 sshd[2381]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:48.530105 systemd[1]: sshd@3-172.31.29.55:22-139.178.89.65:44320.service: Deactivated successfully.
Jan 17 12:12:48.549444 systemd-logind[2055]: Session 4 logged out. Waiting for processes to exit.
Jan 17 12:12:48.553052 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 12:12:48.567856 systemd[1]: Started sshd@4-172.31.29.55:22-139.178.89.65:44332.service - OpenSSH per-connection server daemon (139.178.89.65:44332).
Jan 17 12:12:48.569102 systemd-logind[2055]: Removed session 4.
Jan 17 12:12:48.726953 sshd[2389]: Accepted publickey for core from 139.178.89.65 port 44332 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:48.729432 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:48.739988 systemd-logind[2055]: New session 5 of user core.
Jan 17 12:12:48.744547 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 12:12:48.895989 sudo[2393]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 12:12:48.896526 sudo[2393]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:48.912564 sudo[2393]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:48.935191 sshd[2389]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:48.941054 systemd[1]: sshd@4-172.31.29.55:22-139.178.89.65:44332.service: Deactivated successfully.
Jan 17 12:12:48.946652 systemd-logind[2055]: Session 5 logged out. Waiting for processes to exit.
Jan 17 12:12:48.946952 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 12:12:48.951451 systemd-logind[2055]: Removed session 5.
Jan 17 12:12:48.964312 systemd[1]: Started sshd@5-172.31.29.55:22-139.178.89.65:44348.service - OpenSSH per-connection server daemon (139.178.89.65:44348).
Jan 17 12:12:49.134801 sshd[2398]: Accepted publickey for core from 139.178.89.65 port 44348 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:49.140180 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:49.162870 systemd-logind[2055]: New session 6 of user core.
Jan 17 12:12:49.178816 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:12:49.280670 sudo[2403]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:12:49.281081 sudo[2403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:49.285029 sudo[2403]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:49.292277 sudo[2402]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:12:49.292692 sudo[2402]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:49.313824 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:12:49.317471 auditctl[2406]: No rules
Jan 17 12:12:49.318055 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:12:49.318397 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:12:49.327663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:12:49.360937 augenrules[2425]: No rules
Jan 17 12:12:49.363104 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:12:49.368483 sudo[2402]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:49.391963 sshd[2398]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:49.402208 systemd[1]: sshd@5-172.31.29.55:22-139.178.89.65:44348.service: Deactivated successfully.
Jan 17 12:12:49.409719 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:12:49.411285 systemd-logind[2055]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:12:49.432708 systemd[1]: Started sshd@6-172.31.29.55:22-139.178.89.65:44350.service - OpenSSH per-connection server daemon (139.178.89.65:44350).
Jan 17 12:12:49.436603 systemd-logind[2055]: Removed session 6.
Jan 17 12:12:49.594816 sshd[2434]: Accepted publickey for core from 139.178.89.65 port 44350 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:12:49.596548 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:49.603654 systemd-logind[2055]: New session 7 of user core.
Jan 17 12:12:49.612354 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:12:49.711119 sudo[2438]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:12:49.711626 sudo[2438]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:50.512141 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 12:12:50.512392 (dockerd)[2454]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 12:12:51.351816 dockerd[2454]: time="2025-01-17T12:12:51.351697326Z" level=info msg="Starting up"
Jan 17 12:12:51.577966 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3714290509-merged.mount: Deactivated successfully.
Jan 17 12:12:51.935765 dockerd[2454]: time="2025-01-17T12:12:51.935377003Z" level=info msg="Loading containers: start."
Jan 17 12:12:52.137278 kernel: Initializing XFRM netlink socket
Jan 17 12:12:52.185367 (udev-worker)[2522]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:12:52.284970 systemd-networkd[1644]: docker0: Link UP
Jan 17 12:12:52.299892 dockerd[2454]: time="2025-01-17T12:12:52.299848824Z" level=info msg="Loading containers: done."
Jan 17 12:12:52.340776 dockerd[2454]: time="2025-01-17T12:12:52.340597159Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:12:52.341154 dockerd[2454]: time="2025-01-17T12:12:52.340848751Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:12:52.341231 dockerd[2454]: time="2025-01-17T12:12:52.341172560Z" level=info msg="Daemon has completed initialization"
Jan 17 12:12:52.414270 dockerd[2454]: time="2025-01-17T12:12:52.413921385Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:12:52.415078 systemd[1]: Started docker.service - Docker Application Container Engine.
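With the daemon listening on /run/docker.sock, the API can be exercised without a client SDK; a minimal raw-HTTP sketch over the Unix socket (HTTP/1.0 so the daemon closes the connection after replying):

    import json, socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk

    headers, _, body = raw.partition(b"\r\n\r\n")
    print(json.loads(body)["Version"])  # "26.1.0" per the daemon log above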
Jan 17 12:12:54.026822 containerd[2081]: time="2025-01-17T12:12:54.025997297Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\""
Jan 17 12:12:54.733744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:12:54.741503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:54.783094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480003533.mount: Deactivated successfully.
Jan 17 12:12:55.083254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:55.100097 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:12:55.227910 kubelet[2623]: E0117 12:12:55.227859    2623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:12:55.237377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:12:55.237734 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:12:57.395924 containerd[2081]: time="2025-01-17T12:12:57.395872377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:57.397205 containerd[2081]: time="2025-01-17T12:12:57.397144741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730"
Jan 17 12:12:57.399464 containerd[2081]: time="2025-01-17T12:12:57.398149731Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:57.404590 containerd[2081]: time="2025-01-17T12:12:57.404531622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:57.406922 containerd[2081]: time="2025-01-17T12:12:57.406872178Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.380829426s"
Jan 17 12:12:57.407068 containerd[2081]: time="2025-01-17T12:12:57.406932079Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\""
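From the figures in the pull message above, the effective transfer rate works out to roughly 10 MiB/s:

    # 35137530 bytes in 3.380829426 s, per the PullImage line above
    print(f"{35137530 / 3.380829426 / 2**20:.1f} MiB/s")  # ~9.9 MiB/s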
Jan 17 12:12:57.432586 containerd[2081]: time="2025-01-17T12:12:57.432543403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\""
Jan 17 12:13:00.341164 containerd[2081]: time="2025-01-17T12:13:00.341105262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:00.350953 containerd[2081]: time="2025-01-17T12:13:00.350889746Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641"
Jan 17 12:13:00.353139 containerd[2081]: time="2025-01-17T12:13:00.353062022Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:00.356674 containerd[2081]: time="2025-01-17T12:13:00.356225642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:00.357556 containerd[2081]: time="2025-01-17T12:13:00.357507849Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.924924127s"
Jan 17 12:13:00.357665 containerd[2081]: time="2025-01-17T12:13:00.357560685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\""
Jan 17 12:13:00.386564 containerd[2081]: time="2025-01-17T12:13:00.386523684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\""
Jan 17 12:13:03.563565 containerd[2081]: time="2025-01-17T12:13:03.563506616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:03.566348 containerd[2081]: time="2025-01-17T12:13:03.566106598Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841"
Jan 17 12:13:03.568027 containerd[2081]: time="2025-01-17T12:13:03.567942526Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:03.574256 containerd[2081]: time="2025-01-17T12:13:03.572549974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:03.575352 containerd[2081]: time="2025-01-17T12:13:03.575071743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 3.188304596s"
Jan 17 12:13:03.575470 containerd[2081]: time="2025-01-17T12:13:03.575362350Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\""
Jan 17 12:13:03.647833 containerd[2081]: time="2025-01-17T12:13:03.647782407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\""
Jan 17 12:13:05.336673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 12:13:05.355768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:05.615913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175751794.mount: Deactivated successfully.
Jan 17 12:13:05.689113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:05.705333 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:13:05.799561 kubelet[2717]: E0117 12:13:05.799457    2717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:13:05.804191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:13:05.804417 systemd[1]: kubelet.service: Failed with result 'exit-code'.
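
This crash loop is the normal state of a node before kubeadm has finished: the kubelet unit starts, finds no /var/lib/kubelet/config.yaml, exits with status 1, and systemd schedules the next restart (the counter is at 2 above and reaches 3 below). kubeadm init/join writes that file; a minimal sketch of its shape, not the actual file from this host:

    # /var/lib/kubelet/config.yaml -- sketch of what kubeadm writes later
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt   # path seen later in this log
    staticPodPath: /etc/kubernetes/manifests       # path seen later in this log
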
Jan 17 12:13:06.408109 containerd[2081]: time="2025-01-17T12:13:06.408054435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:06.409510 containerd[2081]: time="2025-01-17T12:13:06.409351669Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941"
Jan 17 12:13:06.411640 containerd[2081]: time="2025-01-17T12:13:06.410498327Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:06.414201 containerd[2081]: time="2025-01-17T12:13:06.414158506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:06.415992 containerd[2081]: time="2025-01-17T12:13:06.415810653Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.767978184s"
Jan 17 12:13:06.415992 containerd[2081]: time="2025-01-17T12:13:06.415869720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\""
Jan 17 12:13:06.447266 containerd[2081]: time="2025-01-17T12:13:06.447184965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:13:07.044061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447859072.mount: Deactivated successfully.
Jan 17 12:13:08.711055 containerd[2081]: time="2025-01-17T12:13:08.710968542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:08.712645 containerd[2081]: time="2025-01-17T12:13:08.712590979Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 17 12:13:08.713384 containerd[2081]: time="2025-01-17T12:13:08.713331580Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:08.724661 containerd[2081]: time="2025-01-17T12:13:08.723223822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:08.724661 containerd[2081]: time="2025-01-17T12:13:08.724470147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.277197229s"
Jan 17 12:13:08.724661 containerd[2081]: time="2025-01-17T12:13:08.724521393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:13:08.779318 containerd[2081]: time="2025-01-17T12:13:08.779275601Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:13:09.285275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276762419.mount: Deactivated successfully.
Jan 17 12:13:09.294210 containerd[2081]: time="2025-01-17T12:13:09.294156598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:09.295373 containerd[2081]: time="2025-01-17T12:13:09.295323745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 17 12:13:09.298252 containerd[2081]: time="2025-01-17T12:13:09.296566440Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:09.301624 containerd[2081]: time="2025-01-17T12:13:09.300194948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:09.301624 containerd[2081]: time="2025-01-17T12:13:09.301399324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 522.075757ms"
Jan 17 12:13:09.301624 containerd[2081]: time="2025-01-17T12:13:09.301441480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 17 12:13:09.326935 containerd[2081]: time="2025-01-17T12:13:09.326889743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 17 12:13:09.902538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177878321.mount: Deactivated successfully.
Jan 17 12:13:11.154713 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 17 12:13:13.120848 containerd[2081]: time="2025-01-17T12:13:13.120785139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:13.126960 containerd[2081]: time="2025-01-17T12:13:13.123918493Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 17 12:13:13.126960 containerd[2081]: time="2025-01-17T12:13:13.124692248Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:13.132794 containerd[2081]: time="2025-01-17T12:13:13.132709473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:13.140214 containerd[2081]: time="2025-01-17T12:13:13.140161628Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.813228747s"
Jan 17 12:13:13.140214 containerd[2081]: time="2025-01-17T12:13:13.140217736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
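
That completes the pre-pull of the v1.29.13 control-plane image set (kube-controller-manager, kube-scheduler, kube-proxy, plus coredns, pause and etcd). Assuming these pulls were driven by a kubeadm-style pre-flight, which is consistent with the bootstrap visible below, the same set can be listed and fetched by hand:

    # Hedged reproduction; assumes kubeadm and crictl on this node's containerd
    kubeadm config images list --kubernetes-version v1.29.13
    crictl pull registry.k8s.io/etcd:3.5.10-0
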
Jan 17 12:13:15.838518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 12:13:15.850364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:16.174524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:16.198441 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:13:16.323031 kubelet[2913]: E0117 12:13:16.319686    2913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:13:16.327359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:13:16.327572 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:13:17.698465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:17.717550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:17.809591 systemd[1]: Reloading requested from client PID 2930 ('systemctl') (unit session-7.scope)...
Jan 17 12:13:17.809787 systemd[1]: Reloading...
Jan 17 12:13:18.189364 zram_generator::config[2976]: No configuration found.
Jan 17 12:13:18.414966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:13:18.599676 systemd[1]: Reloading finished in 787 ms.
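
The docker.socket warning during the reload is cosmetic: systemd rewrites the legacy /var/run path in memory but keeps nagging until the unit is fixed. A drop-in is enough (sketch; ListenStream= is list-valued, so it must be cleared before being reset):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf  (sketch)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
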
Jan 17 12:13:18.703699 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 12:13:18.703888 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 12:13:18.704859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:18.729403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:19.017270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:19.041820 (kubelet)[3039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:13:19.203959 kubelet[3039]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:13:19.203959 kubelet[3039]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:13:19.203959 kubelet[3039]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:13:19.205796 kubelet[3039]: I0117 12:13:19.204092    3039 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
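
This start succeeds (the kubelet config exists by now) but still passes deprecated flags. Two of them have direct KubeletConfiguration equivalents; a sketch of moving them into the config file, where the endpoint value is an assumption about this host's containerd socket and the plugin dir matches the path logged below:

    # KubeletConfiguration fields replacing the deprecated flags (sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed path
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
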
Jan 17 12:13:20.064715 kubelet[3039]: I0117 12:13:20.064646    3039 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:13:20.064715 kubelet[3039]: I0117 12:13:20.064708    3039 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:13:20.065449 kubelet[3039]: I0117 12:13:20.065409    3039 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:13:20.153185 kubelet[3039]: I0117 12:13:20.153140    3039 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:13:20.158992 kubelet[3039]: E0117 12:13:20.158930    3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.177241 kubelet[3039]: I0117 12:13:20.176743    3039 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 17 12:13:20.182220 kubelet[3039]: I0117 12:13:20.182171    3039 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:13:20.183723 kubelet[3039]: I0117 12:13:20.183580    3039 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:13:20.183935 kubelet[3039]: I0117 12:13:20.183730    3039 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:13:20.183935 kubelet[3039]: I0117 12:13:20.183749    3039 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:13:20.183935 kubelet[3039]: I0117 12:13:20.183894    3039 state_mem.go:36] "Initialized new in-memory state store"
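
The HardEvictionThresholds embedded in the nodeConfig dump above are the kubelet defaults; in KubeletConfiguration YAML they correspond to:

    evictionHard:
      imagefs.available: "15%"
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
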
Jan 17 12:13:20.184069 kubelet[3039]: I0117 12:13:20.184055    3039 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:13:20.184162 kubelet[3039]: I0117 12:13:20.184077    3039 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:13:20.184215 kubelet[3039]: I0117 12:13:20.184175    3039 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:13:20.184215 kubelet[3039]: I0117 12:13:20.184194    3039 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:13:20.188963 kubelet[3039]: W0117 12:13:20.187989    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.188963 kubelet[3039]: E0117 12:13:20.188050    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.188963 kubelet[3039]: W0117 12:13:20.188451    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-55&limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.188963 kubelet[3039]: E0117 12:13:20.188520    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-55&limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
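
Every "connection refused" against https://172.31.29.55:6443 in this stretch is expected: this kubelet is the component that will start the kube-apiserver, as a static pod, so nothing is listening on 6443 yet. The errors stop once the control-plane containers below come up; until then the only meaningful probe is local (sketch; assumes the default secure port):

    curl -k https://172.31.29.55:6443/healthz   # refused now, "ok" once the static pod runs
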
Jan 17 12:13:20.188963 kubelet[3039]: I0117 12:13:20.188611    3039 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:13:20.194598 kubelet[3039]: I0117 12:13:20.194551    3039 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:13:20.196267 kubelet[3039]: W0117 12:13:20.196229    3039 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 12:13:20.197014 kubelet[3039]: I0117 12:13:20.196927    3039 server.go:1256] "Started kubelet"
Jan 17 12:13:20.197214 kubelet[3039]: I0117 12:13:20.197192    3039 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:13:20.214455 kubelet[3039]: I0117 12:13:20.214349    3039 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:13:20.217789 kubelet[3039]: I0117 12:13:20.217242    3039 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:13:20.220997 kubelet[3039]: E0117 12:13:20.219496    3039 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.55:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-55.181b79cc6d622a09  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-55,UID:ip-172-31-29-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-55,},FirstTimestamp:2025-01-17 12:13:20.196897289 +0000 UTC m=+1.136772707,LastTimestamp:2025-01-17 12:13:20.196897289 +0000 UTC m=+1.136772707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-55,}"
Jan 17 12:13:20.222615 kubelet[3039]: I0117 12:13:20.221871    3039 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:13:20.222959 kubelet[3039]: I0117 12:13:20.222941    3039 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:13:20.224712 kubelet[3039]: I0117 12:13:20.224689    3039 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:13:20.224921 kubelet[3039]: I0117 12:13:20.224910    3039 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:13:20.226340 kubelet[3039]: I0117 12:13:20.226225    3039 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:13:20.227315 kubelet[3039]: E0117 12:13:20.227016    3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-55?timeout=10s\": dial tcp 172.31.29.55:6443: connect: connection refused" interval="200ms"
Jan 17 12:13:20.231962 kubelet[3039]: W0117 12:13:20.231903    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.232242 kubelet[3039]: E0117 12:13:20.232225    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.233898 kubelet[3039]: I0117 12:13:20.233869    3039 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:13:20.234285 kubelet[3039]: I0117 12:13:20.234257    3039 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:13:20.239489 kubelet[3039]: E0117 12:13:20.238454    3039 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:13:20.239489 kubelet[3039]: I0117 12:13:20.238839    3039 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:13:20.268006 kubelet[3039]: I0117 12:13:20.267961    3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:13:20.270426 kubelet[3039]: I0117 12:13:20.270394    3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:13:20.270562 kubelet[3039]: I0117 12:13:20.270437    3039 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:13:20.270562 kubelet[3039]: I0117 12:13:20.270460    3039 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:13:20.270562 kubelet[3039]: E0117 12:13:20.270520    3039 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:13:20.271547 kubelet[3039]: W0117 12:13:20.271519    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.271647 kubelet[3039]: E0117 12:13:20.271559    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:20.286501 kubelet[3039]: I0117 12:13:20.286455    3039 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:13:20.286501 kubelet[3039]: I0117 12:13:20.286485    3039 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:13:20.286501 kubelet[3039]: I0117 12:13:20.286504    3039 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:13:20.288890 kubelet[3039]: I0117 12:13:20.288852    3039 policy_none.go:49] "None policy: Start"
Jan 17 12:13:20.289717 kubelet[3039]: I0117 12:13:20.289665    3039 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:13:20.289717 kubelet[3039]: I0117 12:13:20.289716    3039 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:13:20.296701 kubelet[3039]: I0117 12:13:20.296669    3039 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:13:20.296958 kubelet[3039]: I0117 12:13:20.296937    3039 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:13:20.305530 kubelet[3039]: E0117 12:13:20.305500    3039 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-55\" not found"
Jan 17 12:13:20.326682 kubelet[3039]: I0117 12:13:20.326579    3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:20.328869 kubelet[3039]: E0117 12:13:20.328839    3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.55:6443/api/v1/nodes\": dial tcp 172.31.29.55:6443: connect: connection refused" node="ip-172-31-29-55"
Jan 17 12:13:20.371310 kubelet[3039]: I0117 12:13:20.371261    3039 topology_manager.go:215] "Topology Admit Handler" podUID="75249d4d7e50c719359e4a0d0882fe03" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:20.373068 kubelet[3039]: I0117 12:13:20.372994    3039 topology_manager.go:215] "Topology Admit Handler" podUID="2e4a490901363d04a6b682b6a7b63f5b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.376298 kubelet[3039]: I0117 12:13:20.376162    3039 topology_manager.go:215] "Topology Admit Handler" podUID="587dab5eaf487169b96f3e4c8bd51db5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-55"
Jan 17 12:13:20.428028 kubelet[3039]: I0117 12:13:20.427146    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-ca-certs\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:20.428028 kubelet[3039]: I0117 12:13:20.427215    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.428028 kubelet[3039]: I0117 12:13:20.427245    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.428028 kubelet[3039]: I0117 12:13:20.427274    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/587dab5eaf487169b96f3e4c8bd51db5-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-55\" (UID: \"587dab5eaf487169b96f3e4c8bd51db5\") " pod="kube-system/kube-scheduler-ip-172-31-29-55"
Jan 17 12:13:20.428028 kubelet[3039]: I0117 12:13:20.427304    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.428333 kubelet[3039]: I0117 12:13:20.427331    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:20.428333 kubelet[3039]: I0117 12:13:20.427357    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:20.428333 kubelet[3039]: I0117 12:13:20.427384    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.428333 kubelet[3039]: I0117 12:13:20.427412    3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:20.428333 kubelet[3039]: E0117 12:13:20.427972    3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-55?timeout=10s\": dial tcp 172.31.29.55:6443: connect: connection refused" interval="400ms"
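
The three "Topology Admit Handler" pods and the host-path volumes above come from the static pod manifests under the staticPodPath registered earlier; the API server is not involved. On a kubeadm-style layout the directory would look like this (sketch):

    ls /etc/kubernetes/manifests
    # expected at least: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
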
Jan 17 12:13:20.532233 kubelet[3039]: I0117 12:13:20.531846    3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:20.532233 kubelet[3039]: E0117 12:13:20.532212    3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.55:6443/api/v1/nodes\": dial tcp 172.31.29.55:6443: connect: connection refused" node="ip-172-31-29-55"
Jan 17 12:13:20.678998 containerd[2081]: time="2025-01-17T12:13:20.678938392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-55,Uid:75249d4d7e50c719359e4a0d0882fe03,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:20.689295 containerd[2081]: time="2025-01-17T12:13:20.689249251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-55,Uid:2e4a490901363d04a6b682b6a7b63f5b,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:20.692165 containerd[2081]: time="2025-01-17T12:13:20.691798856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-55,Uid:587dab5eaf487169b96f3e4c8bd51db5,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:20.828924 kubelet[3039]: E0117 12:13:20.828886    3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-55?timeout=10s\": dial tcp 172.31.29.55:6443: connect: connection refused" interval="800ms"
Jan 17 12:13:20.935282 kubelet[3039]: I0117 12:13:20.934763    3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:20.935282 kubelet[3039]: E0117 12:13:20.935144    3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.55:6443/api/v1/nodes\": dial tcp 172.31.29.55:6443: connect: connection refused" node="ip-172-31-29-55"
Jan 17 12:13:21.082377 kubelet[3039]: W0117 12:13:21.082243    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.082377 kubelet[3039]: E0117 12:13:21.082339    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.136377 kubelet[3039]: W0117 12:13:21.136307    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-55&limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.136534 kubelet[3039]: E0117 12:13:21.136380    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-55&limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.266300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949691215.mount: Deactivated successfully.
Jan 17 12:13:21.273658 containerd[2081]: time="2025-01-17T12:13:21.273573170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:13:21.275246 containerd[2081]: time="2025-01-17T12:13:21.275195426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 17 12:13:21.275646 containerd[2081]: time="2025-01-17T12:13:21.275611952Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:13:21.278033 containerd[2081]: time="2025-01-17T12:13:21.277827558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:13:21.280818 containerd[2081]: time="2025-01-17T12:13:21.280768083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:13:21.280932 containerd[2081]: time="2025-01-17T12:13:21.280877221Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:13:21.286022 containerd[2081]: time="2025-01-17T12:13:21.284915811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:13:21.290479 containerd[2081]: time="2025-01-17T12:13:21.287570192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.513246ms"
Jan 17 12:13:21.290755 containerd[2081]: time="2025-01-17T12:13:21.290715491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.375708ms"
Jan 17 12:13:21.292970 containerd[2081]: time="2025-01-17T12:13:21.292924618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.041345ms"
Jan 17 12:13:21.293865 containerd[2081]: time="2025-01-17T12:13:21.293824330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
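
Note the version skew: the kubelet pulled pause:3.9 earlier via the deprecated --pod-infra-container-image flag, but when containerd actually creates the three sandboxes it uses the CRI plugin's own sandbox image, hence the pause:3.8 pulls here (the deprecation warning above already said the flag only feeds the image garbage collector). The setting lives in containerd's config; relevant excerpt, sketched for containerd 1.7:

    # /etc/containerd/config.toml (relevant excerpt, sketch)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
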
Jan 17 12:13:21.365302 kubelet[3039]: W0117 12:13:21.365157    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.365302 kubelet[3039]: E0117 12:13:21.365233    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.456436 kubelet[3039]: W0117 12:13:21.456375    3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.456436 kubelet[3039]: E0117 12:13:21.456439    3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:21.599724 containerd[2081]: time="2025-01-17T12:13:21.599111070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:21.599724 containerd[2081]: time="2025-01-17T12:13:21.599177755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:21.599724 containerd[2081]: time="2025-01-17T12:13:21.599224228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.600549 containerd[2081]: time="2025-01-17T12:13:21.600277783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615538357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615605245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615626960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615772611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615184621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615260874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615284210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.618078 containerd[2081]: time="2025-01-17T12:13:21.615389397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:21.629938 kubelet[3039]: E0117 12:13:21.629906    3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-55?timeout=10s\": dial tcp 172.31.29.55:6443: connect: connection refused" interval="1.6s"
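
The lease controller's retry interval doubles on each consecutive failure, 200ms, 400ms, 800ms, and now 1.6s (200 ms × 2³); the backoff resets once the apiserver starts answering.
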
Jan 17 12:13:21.739020 kubelet[3039]: I0117 12:13:21.737890    3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:21.739020 kubelet[3039]: E0117 12:13:21.738263    3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.55:6443/api/v1/nodes\": dial tcp 172.31.29.55:6443: connect: connection refused" node="ip-172-31-29-55"
Jan 17 12:13:21.755588 containerd[2081]: time="2025-01-17T12:13:21.755181272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-55,Uid:75249d4d7e50c719359e4a0d0882fe03,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d818b167b55b05e1394b91972757469e82bc825b5e847853188e207551bf606\""
Jan 17 12:13:21.757721 containerd[2081]: time="2025-01-17T12:13:21.756555636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-55,Uid:2e4a490901363d04a6b682b6a7b63f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a794c1596d18e60bfae1f3f1facd076539b69fb9ffb9f1aeb0b85d8faaa4577e\""
Jan 17 12:13:21.767953 containerd[2081]: time="2025-01-17T12:13:21.767905821Z" level=info msg="CreateContainer within sandbox \"a794c1596d18e60bfae1f3f1facd076539b69fb9ffb9f1aeb0b85d8faaa4577e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:13:21.772627 containerd[2081]: time="2025-01-17T12:13:21.772587546Z" level=info msg="CreateContainer within sandbox \"1d818b167b55b05e1394b91972757469e82bc825b5e847853188e207551bf606\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:13:21.780775 containerd[2081]: time="2025-01-17T12:13:21.780713011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-55,Uid:587dab5eaf487169b96f3e4c8bd51db5,Namespace:kube-system,Attempt:0,} returns sandbox id \"087bcc1e74f9b52de71a30855846f3bbd9976e1ad6d6f6867faab48c5daf3b50\""
Jan 17 12:13:21.786687 containerd[2081]: time="2025-01-17T12:13:21.786638820Z" level=info msg="CreateContainer within sandbox \"087bcc1e74f9b52de71a30855846f3bbd9976e1ad6d6f6867faab48c5daf3b50\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:13:21.813468 containerd[2081]: time="2025-01-17T12:13:21.813031932Z" level=info msg="CreateContainer within sandbox \"a794c1596d18e60bfae1f3f1facd076539b69fb9ffb9f1aeb0b85d8faaa4577e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03\""
Jan 17 12:13:21.814864 containerd[2081]: time="2025-01-17T12:13:21.813775564Z" level=info msg="StartContainer for \"bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03\""
Jan 17 12:13:21.816322 containerd[2081]: time="2025-01-17T12:13:21.816271454Z" level=info msg="CreateContainer within sandbox \"1d818b167b55b05e1394b91972757469e82bc825b5e847853188e207551bf606\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec49011f863af21991b6717f00d41fc5ea54d95d53afc0fc2babf1d432ddf61d\""
Jan 17 12:13:21.819779 containerd[2081]: time="2025-01-17T12:13:21.819730963Z" level=info msg="CreateContainer within sandbox \"087bcc1e74f9b52de71a30855846f3bbd9976e1ad6d6f6867faab48c5daf3b50\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2\""
Jan 17 12:13:21.820532 containerd[2081]: time="2025-01-17T12:13:21.820501548Z" level=info msg="StartContainer for \"ec49011f863af21991b6717f00d41fc5ea54d95d53afc0fc2babf1d432ddf61d\""
Jan 17 12:13:21.827359 containerd[2081]: time="2025-01-17T12:13:21.827166488Z" level=info msg="StartContainer for \"1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2\""
Jan 17 12:13:21.988410 containerd[2081]: time="2025-01-17T12:13:21.988301215Z" level=info msg="StartContainer for \"ec49011f863af21991b6717f00d41fc5ea54d95d53afc0fc2babf1d432ddf61d\" returns successfully"
Jan 17 12:13:21.992202 containerd[2081]: time="2025-01-17T12:13:21.992156445Z" level=info msg="StartContainer for \"bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03\" returns successfully"
Jan 17 12:13:22.045444 containerd[2081]: time="2025-01-17T12:13:22.045288231Z" level=info msg="StartContainer for \"1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2\" returns successfully"
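
All three control-plane containers are now running inside their sandboxes. From the node this can be confirmed against the CRI directly (sketch; assumes crictl is pointed at the containerd socket):

    crictl ps --name 'kube-apiserver|kube-controller-manager|kube-scheduler'
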
Jan 17 12:13:22.186625 kubelet[3039]: E0117 12:13:22.186596    3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.55:6443: connect: connection refused
Jan 17 12:13:23.342427 kubelet[3039]: I0117 12:13:23.342396    3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:25.024497 update_engine[2062]: I20250117 12:13:25.024413  2062 update_attempter.cc:509] Updating boot flags...
Jan 17 12:13:25.334008 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3320)
Jan 17 12:13:25.990665 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3320)
Jan 17 12:13:26.682187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3320)
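
These repeated BTRFS "duplicate device" warnings coincide with update_engine updating boot flags and udev re-scanning the partitions; the identical devid and generation on every line suggests the same filesystem being re-announced rather than a genuine second device, which is typically benign on Flatcar. A hedged sanity check:

    btrfs filesystem show /dev/nvme0n1p3   # expect a single device if this was just a rescan
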
Jan 17 12:13:27.110841 kubelet[3039]: E0117 12:13:27.110786    3039 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-55\" not found" node="ip-172-31-29-55"
Jan 17 12:13:27.203099 kubelet[3039]: I0117 12:13:27.190882    3039 apiserver.go:52] "Watching apiserver"
Jan 17 12:13:27.227679 kubelet[3039]: I0117 12:13:27.225773    3039 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 17 12:13:27.274730 kubelet[3039]: I0117 12:13:27.274401    3039 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-55"
Jan 17 12:13:27.288365 kubelet[3039]: E0117 12:13:27.277591    3039 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-55.181b79cc6d622a09  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-55,UID:ip-172-31-29-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-55,},FirstTimestamp:2025-01-17 12:13:20.196897289 +0000 UTC m=+1.136772707,LastTimestamp:2025-01-17 12:13:20.196897289 +0000 UTC m=+1.136772707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-55,}"
Jan 17 12:13:27.361284 kubelet[3039]: E0117 12:13:27.361165    3039 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-55.181b79cc6fdbe803  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-55,UID:ip-172-31-29-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-29-55,},FirstTimestamp:2025-01-17 12:13:20.238430211 +0000 UTC m=+1.178305624,LastTimestamp:2025-01-17 12:13:20.238430211 +0000 UTC m=+1.178305624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-55,}"
Jan 17 12:13:30.343546 kubelet[3039]: I0117 12:13:30.343336    3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-55" podStartSLOduration=2.343267241 podStartE2EDuration="2.343267241s" podCreationTimestamp="2025-01-17 12:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:30.339461298 +0000 UTC m=+11.279336714" watchObservedRunningTime="2025-01-17 12:13:30.343267241 +0000 UTC m=+11.283142652"
Jan 17 12:13:30.758581 systemd[1]: Reloading requested from client PID 3575 ('systemctl') (unit session-7.scope)...
Jan 17 12:13:30.758602 systemd[1]: Reloading...
Jan 17 12:13:31.046533 zram_generator::config[3621]: No configuration found.
Jan 17 12:13:31.273586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:13:31.508888 systemd[1]: Reloading finished in 749 ms.
Jan 17 12:13:31.557967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:31.559507 kubelet[3039]: I0117 12:13:31.558901    3039 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:13:31.578636 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:13:31.579063 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:31.591510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:13:31.924812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:13:31.983698 (kubelet)[3681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:13:32.100745 kubelet[3681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:13:32.100745 kubelet[3681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:13:32.100745 kubelet[3681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:13:32.102828 kubelet[3681]: I0117 12:13:32.100812    3681 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:13:32.108683 kubelet[3681]: I0117 12:13:32.108647    3681 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:13:32.108683 kubelet[3681]: I0117 12:13:32.108676    3681 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:13:32.109029 kubelet[3681]: I0117 12:13:32.108959    3681 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:13:32.112697 kubelet[3681]: I0117 12:13:32.112658    3681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
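
Unlike the previous instance, this kubelet finds an already-bootstrapped client certificate pair, so the background bootstrap that kept failing with "connection refused" is no longer needed. The pair can be inspected in place (sketch):

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates
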
Jan 17 12:13:32.124799 kubelet[3681]: I0117 12:13:32.124181    3681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:13:32.133167 kubelet[3681]: I0117 12:13:32.133137    3681 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 17 12:13:32.134483 kubelet[3681]: I0117 12:13:32.134459    3681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:13:32.135105 kubelet[3681]: I0117 12:13:32.135082    3681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:13:32.135454 kubelet[3681]: I0117 12:13:32.135118    3681 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:13:32.135454 kubelet[3681]: I0117 12:13:32.135276    3681 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:13:32.135454 kubelet[3681]: I0117 12:13:32.135386    3681 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:13:32.135595 kubelet[3681]: I0117 12:13:32.135514    3681 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:13:32.135595 kubelet[3681]: I0117 12:13:32.135546    3681 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:13:32.135595 kubelet[3681]: I0117 12:13:32.135581    3681 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:13:32.138219 kubelet[3681]: I0117 12:13:32.135603    3681 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:13:32.139155 kubelet[3681]: I0117 12:13:32.138708    3681 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:13:32.140349 kubelet[3681]: I0117 12:13:32.140280    3681 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:13:32.142372 kubelet[3681]: I0117 12:13:32.142291    3681 server.go:1256] "Started kubelet"
Jan 17 12:13:32.152099 kubelet[3681]: I0117 12:13:32.152053    3681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:13:32.175414 kubelet[3681]: I0117 12:13:32.174474    3681 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:13:32.175414 kubelet[3681]: I0117 12:13:32.175260    3681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:13:32.178968 kubelet[3681]: I0117 12:13:32.177804    3681 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:13:32.181368 kubelet[3681]: I0117 12:13:32.180199    3681 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:13:32.181567 kubelet[3681]: I0117 12:13:32.181553    3681 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:13:32.186072 kubelet[3681]: I0117 12:13:32.184516    3681 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:13:32.214594 kubelet[3681]: I0117 12:13:32.210368    3681 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:13:32.223915 sudo[3700]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 12:13:32.226621 sudo[3700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 12:13:32.249693 kubelet[3681]: I0117 12:13:32.249652    3681 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:13:32.249693 kubelet[3681]: I0117 12:13:32.249678    3681 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:13:32.249938 kubelet[3681]: I0117 12:13:32.249805    3681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:13:32.250699 kubelet[3681]: I0117 12:13:32.250647    3681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:13:32.276533 kubelet[3681]: I0117 12:13:32.276166    3681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:13:32.276533 kubelet[3681]: I0117 12:13:32.276209    3681 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:13:32.276533 kubelet[3681]: I0117 12:13:32.276231    3681 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:13:32.276533 kubelet[3681]: E0117 12:13:32.276305    3681 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:13:32.297079 kubelet[3681]: E0117 12:13:32.297048    3681 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Jan 17 12:13:32.301460 kubelet[3681]: E0117 12:13:32.301423    3681 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:13:32.313715 kubelet[3681]: I0117 12:13:32.313670    3681 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-55"
Jan 17 12:13:32.333734 kubelet[3681]: I0117 12:13:32.333678    3681 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-55"
Jan 17 12:13:32.334624 kubelet[3681]: I0117 12:13:32.334052    3681 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-55"
Jan 17 12:13:32.376707 kubelet[3681]: E0117 12:13:32.376674    3681 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 12:13:32.476129 kubelet[3681]: I0117 12:13:32.476022    3681 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:13:32.476129 kubelet[3681]: I0117 12:13:32.476050    3681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:13:32.476129 kubelet[3681]: I0117 12:13:32.476069    3681 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:13:32.477006 kubelet[3681]: I0117 12:13:32.476703    3681 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 12:13:32.477006 kubelet[3681]: I0117 12:13:32.476736    3681 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 12:13:32.477006 kubelet[3681]: I0117 12:13:32.476746    3681 policy_none.go:49] "None policy: Start"
Jan 17 12:13:32.479876 kubelet[3681]: I0117 12:13:32.479859    3681 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:13:32.479966 kubelet[3681]: I0117 12:13:32.479889    3681 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:13:32.480401 kubelet[3681]: I0117 12:13:32.480380    3681 state_mem.go:75] "Updated machine memory state"
Jan 17 12:13:32.486461 kubelet[3681]: I0117 12:13:32.486089    3681 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:13:32.490673 kubelet[3681]: I0117 12:13:32.490641    3681 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
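The kubelet records above use klog's header format: in `I0117 12:13:32.142291    3681 server.go:1256] "Started kubelet"`, the leading character is the severity (I/W/E/F), followed by month+day, wall-clock time, the PID, and the emitting source file:line. A minimal sketch of a parser for that header; the regular expression and field names here are my own illustration, not taken from any kubelet or klog library:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the header klog prepends to each record, e.g.
// "I0117 12:13:32.142291    3681 server.go:1256] ..." where the first
// character is the severity (I/W/E/F) and the number after the time is the PID.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.-]+:\d+)\] (.*)$`)

func main() {
	line := `I0117 12:13:32.142291    3681 server.go:1256] "Started kubelet"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```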
Jan 17 12:13:32.580804 kubelet[3681]: I0117 12:13:32.576879    3681 topology_manager.go:215] "Topology Admit Handler" podUID="75249d4d7e50c719359e4a0d0882fe03" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:32.580804 kubelet[3681]: I0117 12:13:32.579035    3681 topology_manager.go:215] "Topology Admit Handler" podUID="2e4a490901363d04a6b682b6a7b63f5b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.580804 kubelet[3681]: I0117 12:13:32.579107    3681 topology_manager.go:215] "Topology Admit Handler" podUID="587dab5eaf487169b96f3e4c8bd51db5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-55"
Jan 17 12:13:32.586844 kubelet[3681]: I0117 12:13:32.586788    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.592927 kubelet[3681]: I0117 12:13:32.592506    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.593698 kubelet[3681]: I0117 12:13:32.593675    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.594041 kubelet[3681]: I0117 12:13:32.594025    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.594841 kubelet[3681]: I0117 12:13:32.594810    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/587dab5eaf487169b96f3e4c8bd51db5-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-55\" (UID: \"587dab5eaf487169b96f3e4c8bd51db5\") " pod="kube-system/kube-scheduler-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: I0117 12:13:32.594853    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4a490901363d04a6b682b6a7b63f5b-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-55\" (UID: \"2e4a490901363d04a6b682b6a7b63f5b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: I0117 12:13:32.594891    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: I0117 12:13:32.594925    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: I0117 12:13:32.594946    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75249d4d7e50c719359e4a0d0882fe03-ca-certs\") pod \"kube-apiserver-ip-172-31-29-55\" (UID: \"75249d4d7e50c719359e4a0d0882fe03\") " pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: E0117 12:13:32.594774    3681 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-55\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-55"
Jan 17 12:13:32.596387 kubelet[3681]: E0117 12:13:32.595127    3681 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-55\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-55"
Jan 17 12:13:33.141010 kubelet[3681]: I0117 12:13:33.140945    3681 apiserver.go:52] "Watching apiserver"
Jan 17 12:13:33.183915 kubelet[3681]: I0117 12:13:33.180725    3681 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 17 12:13:33.255718 sudo[3700]: pam_unix(sudo:session): session closed for user root
Jan 17 12:13:33.377101 kubelet[3681]: I0117 12:13:33.376956    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-55" podStartSLOduration=3.376695482 podStartE2EDuration="3.376695482s" podCreationTimestamp="2025-01-17 12:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:33.360677972 +0000 UTC m=+1.365219206" watchObservedRunningTime="2025-01-17 12:13:33.376695482 +0000 UTC m=+1.381236711"
Jan 17 12:13:33.396180 kubelet[3681]: I0117 12:13:33.393879    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-55" podStartSLOduration=1.393417697 podStartE2EDuration="1.393417697s" podCreationTimestamp="2025-01-17 12:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:33.377749618 +0000 UTC m=+1.382290852" watchObservedRunningTime="2025-01-17 12:13:33.393417697 +0000 UTC m=+1.397958932"
Jan 17 12:13:36.221864 sudo[2438]: pam_unix(sudo:session): session closed for user root
Jan 17 12:13:36.245687 sshd[2434]: pam_unix(sshd:session): session closed for user core
Jan 17 12:13:36.252344 systemd[1]: sshd@6-172.31.29.55:22-139.178.89.65:44350.service: Deactivated successfully.
Jan 17 12:13:36.260348 systemd-logind[2055]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:13:36.263877 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:13:36.266642 systemd-logind[2055]: Removed session 7.
Jan 17 12:13:44.159446 kubelet[3681]: I0117 12:13:44.159414    3681 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 12:13:44.162096 containerd[2081]: time="2025-01-17T12:13:44.162046582Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 12:13:44.164337 kubelet[3681]: I0117 12:13:44.164308    3681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
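At this point the node has been assigned pod CIDR 192.168.0.0/24 and the kubelet has pushed it to containerd through CRI. A small stdlib-only sketch of what that prefix provides (this validates the CIDR locally; it is not the CRI update call itself):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The pod CIDR assigned to this node in the log above.
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	// A /24 yields 2^(32-24) = 256 addresses for pods on this node
	// (the CNI plugin typically reserves a few, e.g. network and gateway).
	fmt.Printf("network=%s prefix=/%d addresses=%d\n",
		cidr, ones, 1<<(bits-ones))
}
```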
Jan 17 12:13:44.182043 kubelet[3681]: I0117 12:13:44.179018    3681 topology_manager.go:215] "Topology Admit Handler" podUID="46f9bfec-05fe-43d0-97e4-009de82ae92e" podNamespace="kube-system" podName="cilium-operator-5cc964979-br66s"
Jan 17 12:13:44.299498 kubelet[3681]: I0117 12:13:44.299406    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6zm\" (UniqueName: \"kubernetes.io/projected/46f9bfec-05fe-43d0-97e4-009de82ae92e-kube-api-access-kg6zm\") pod \"cilium-operator-5cc964979-br66s\" (UID: \"46f9bfec-05fe-43d0-97e4-009de82ae92e\") " pod="kube-system/cilium-operator-5cc964979-br66s"
Jan 17 12:13:44.299498 kubelet[3681]: I0117 12:13:44.299467    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f9bfec-05fe-43d0-97e4-009de82ae92e-cilium-config-path\") pod \"cilium-operator-5cc964979-br66s\" (UID: \"46f9bfec-05fe-43d0-97e4-009de82ae92e\") " pod="kube-system/cilium-operator-5cc964979-br66s"
Jan 17 12:13:44.343269 kubelet[3681]: I0117 12:13:44.343029    3681 topology_manager.go:215] "Topology Admit Handler" podUID="c0b28e0a-77bc-4188-bcaa-9b69a456fb41" podNamespace="kube-system" podName="kube-proxy-drtbp"
Jan 17 12:13:44.359690 kubelet[3681]: I0117 12:13:44.359323    3681 topology_manager.go:215] "Topology Admit Handler" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" podNamespace="kube-system" podName="cilium-5bg4m"
Jan 17 12:13:44.400040 kubelet[3681]: I0117 12:13:44.399937    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-xtables-lock\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.400040 kubelet[3681]: I0117 12:13:44.400018    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdtsr\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-kube-api-access-mdtsr\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400077    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hostproc\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400114    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-net\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400146    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-cgroup\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400261    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-run\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400302    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqfr\" (UniqueName: \"kubernetes.io/projected/c0b28e0a-77bc-4188-bcaa-9b69a456fb41-kube-api-access-rcqfr\") pod \"kube-proxy-drtbp\" (UID: \"c0b28e0a-77bc-4188-bcaa-9b69a456fb41\") " pod="kube-system/kube-proxy-drtbp"
Jan 17 12:13:44.401774 kubelet[3681]: I0117 12:13:44.400358    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0b28e0a-77bc-4188-bcaa-9b69a456fb41-lib-modules\") pod \"kube-proxy-drtbp\" (UID: \"c0b28e0a-77bc-4188-bcaa-9b69a456fb41\") " pod="kube-system/kube-proxy-drtbp"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400398    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0b28e0a-77bc-4188-bcaa-9b69a456fb41-xtables-lock\") pod \"kube-proxy-drtbp\" (UID: \"c0b28e0a-77bc-4188-bcaa-9b69a456fb41\") " pod="kube-system/kube-proxy-drtbp"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400454    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-clustermesh-secrets\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400500    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hubble-tls\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400534    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-etc-cni-netd\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400568    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-kernel\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402080 kubelet[3681]: I0117 12:13:44.400602    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-bpf-maps\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402357 kubelet[3681]: I0117 12:13:44.400632    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0b28e0a-77bc-4188-bcaa-9b69a456fb41-kube-proxy\") pod \"kube-proxy-drtbp\" (UID: \"c0b28e0a-77bc-4188-bcaa-9b69a456fb41\") " pod="kube-system/kube-proxy-drtbp"
Jan 17 12:13:44.402357 kubelet[3681]: I0117 12:13:44.400668    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cni-path\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402357 kubelet[3681]: I0117 12:13:44.400716    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-lib-modules\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.402357 kubelet[3681]: I0117 12:13:44.400753    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-config-path\") pod \"cilium-5bg4m\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") " pod="kube-system/cilium-5bg4m"
Jan 17 12:13:44.493422 containerd[2081]: time="2025-01-17T12:13:44.493310896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-br66s,Uid:46f9bfec-05fe-43d0-97e4-009de82ae92e,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:44.592095 containerd[2081]: time="2025-01-17T12:13:44.591885753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:44.594020 containerd[2081]: time="2025-01-17T12:13:44.592758231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:44.594020 containerd[2081]: time="2025-01-17T12:13:44.592779400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.594020 containerd[2081]: time="2025-01-17T12:13:44.592885063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.658164 containerd[2081]: time="2025-01-17T12:13:44.657956911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drtbp,Uid:c0b28e0a-77bc-4188-bcaa-9b69a456fb41,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:44.670672 containerd[2081]: time="2025-01-17T12:13:44.670626163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-br66s,Uid:46f9bfec-05fe-43d0-97e4-009de82ae92e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\""
Jan 17 12:13:44.675447 containerd[2081]: time="2025-01-17T12:13:44.674731745Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 17 12:13:44.686459 containerd[2081]: time="2025-01-17T12:13:44.686425621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bg4m,Uid:70e8a5de-6357-4fbc-830e-df6a4e2d80ba,Namespace:kube-system,Attempt:0,}"
Jan 17 12:13:44.717250 containerd[2081]: time="2025-01-17T12:13:44.716777284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:44.717250 containerd[2081]: time="2025-01-17T12:13:44.716853004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:44.717250 containerd[2081]: time="2025-01-17T12:13:44.716878214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.717250 containerd[2081]: time="2025-01-17T12:13:44.717121074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.775477 containerd[2081]: time="2025-01-17T12:13:44.775202932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:13:44.775477 containerd[2081]: time="2025-01-17T12:13:44.775274789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:13:44.775905 containerd[2081]: time="2025-01-17T12:13:44.775337186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.775905 containerd[2081]: time="2025-01-17T12:13:44.775582889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:13:44.817713 containerd[2081]: time="2025-01-17T12:13:44.817673542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drtbp,Uid:c0b28e0a-77bc-4188-bcaa-9b69a456fb41,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed5f39c83f0b197c25179f52fd8f60fbf028e43d2f6512bd365943cb9bf49f5\""
Jan 17 12:13:44.827472 containerd[2081]: time="2025-01-17T12:13:44.827430022Z" level=info msg="CreateContainer within sandbox \"aed5f39c83f0b197c25179f52fd8f60fbf028e43d2f6512bd365943cb9bf49f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:13:44.850129 containerd[2081]: time="2025-01-17T12:13:44.850069698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bg4m,Uid:70e8a5de-6357-4fbc-830e-df6a4e2d80ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\""
Jan 17 12:13:44.861088 containerd[2081]: time="2025-01-17T12:13:44.860959015Z" level=info msg="CreateContainer within sandbox \"aed5f39c83f0b197c25179f52fd8f60fbf028e43d2f6512bd365943cb9bf49f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"564f7a05c9f853f304121e5af69641a88404ceadc8504dfbb25a1c71286a48c1\""
Jan 17 12:13:44.863833 containerd[2081]: time="2025-01-17T12:13:44.861971804Z" level=info msg="StartContainer for \"564f7a05c9f853f304121e5af69641a88404ceadc8504dfbb25a1c71286a48c1\""
Jan 17 12:13:44.948150 containerd[2081]: time="2025-01-17T12:13:44.948105643Z" level=info msg="StartContainer for \"564f7a05c9f853f304121e5af69641a88404ceadc8504dfbb25a1c71286a48c1\" returns successfully"
Jan 17 12:13:46.442052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202398726.mount: Deactivated successfully.
Jan 17 12:13:48.184830 containerd[2081]: time="2025-01-17T12:13:48.184781043Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:48.186647 containerd[2081]: time="2025-01-17T12:13:48.186489283Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907177"
Jan 17 12:13:48.192263 containerd[2081]: time="2025-01-17T12:13:48.192189586Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:13:48.195995 containerd[2081]: time="2025-01-17T12:13:48.195126433Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.520349696s"
Jan 17 12:13:48.195995 containerd[2081]: time="2025-01-17T12:13:48.195500096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
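The two containerd lines above give both sides of a throughput estimate for the operator-generic pull: 18907177 bytes read over a reported 3.520349696s. A back-of-envelope calculation using only the numbers printed in the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 18907177      // "bytes read" from the containerd line
	const pullSeconds = 3.520349696 // reported pull duration
	mib := float64(bytesRead) / (1024 * 1024)
	// Prints roughly: pulled 18.0 MiB at 5.12 MiB/s
	fmt.Printf("pulled %.1f MiB at %.2f MiB/s\n", mib, mib/pullSeconds)
}
```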
Jan 17 12:13:48.196948 containerd[2081]: time="2025-01-17T12:13:48.196843094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 17 12:13:48.215404 containerd[2081]: time="2025-01-17T12:13:48.215357761Z" level=info msg="CreateContainer within sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 12:13:48.249296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263524157.mount: Deactivated successfully.
Jan 17 12:13:48.252191 containerd[2081]: time="2025-01-17T12:13:48.249505779Z" level=info msg="CreateContainer within sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\""
Jan 17 12:13:48.253676 containerd[2081]: time="2025-01-17T12:13:48.253389872Z" level=info msg="StartContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\""
Jan 17 12:13:48.555637 containerd[2081]: time="2025-01-17T12:13:48.552589266Z" level=info msg="StartContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" returns successfully"
Jan 17 12:13:49.579987 kubelet[3681]: I0117 12:13:49.577017    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-drtbp" podStartSLOduration=5.57694693 podStartE2EDuration="5.57694693s" podCreationTimestamp="2025-01-17 12:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:45.421447896 +0000 UTC m=+13.425989131" watchObservedRunningTime="2025-01-17 12:13:49.57694693 +0000 UTC m=+17.581488158"
Jan 17 12:13:59.908391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892337400.mount: Deactivated successfully.
Jan 17 12:14:02.836001 systemd-resolved[1968]: Under memory pressure, flushing caches.
Jan 17 12:14:02.836118 systemd-resolved[1968]: Flushed all caches.
Jan 17 12:14:02.839439 systemd-journald[1567]: Under memory pressure, flushing caches.

Jan 17 12:14:04.878322 systemd-resolved[1968]: Under memory pressure, flushing caches.
Jan 17 12:14:04.878358 systemd-resolved[1968]: Flushed all caches.
Jan 17 12:14:04.881267 systemd-journald[1567]: Under memory pressure, flushing caches.
Jan 17 12:14:06.394394 containerd[2081]: time="2025-01-17T12:14:06.394342838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:14:06.395912 containerd[2081]: time="2025-01-17T12:14:06.395667204Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735367"
Jan 17 12:14:06.397853 containerd[2081]: time="2025-01-17T12:14:06.397803627Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:14:06.402866 containerd[2081]: time="2025-01-17T12:14:06.402814940Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.205926441s"
Jan 17 12:14:06.403160 containerd[2081]: time="2025-01-17T12:14:06.403039750Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 17 12:14:06.691326 containerd[2081]: time="2025-01-17T12:14:06.691034292Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:14:06.765539 containerd[2081]: time="2025-01-17T12:14:06.765236365Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\""
Jan 17 12:14:06.805803 containerd[2081]: time="2025-01-17T12:14:06.805767278Z" level=info msg="StartContainer for \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\""
Jan 17 12:14:06.925969 systemd-resolved[1968]: Under memory pressure, flushing caches.
Jan 17 12:14:06.926007 systemd-journald[1567]: Under memory pressure, flushing caches.
Jan 17 12:14:06.926357 systemd-resolved[1968]: Flushed all caches.
Jan 17 12:14:07.034895 containerd[2081]: time="2025-01-17T12:14:07.034668143Z" level=info msg="StartContainer for \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\" returns successfully"
Jan 17 12:14:07.318831 containerd[2081]: time="2025-01-17T12:14:07.292362472Z" level=info msg="shim disconnected" id=f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d namespace=k8s.io
Jan 17 12:14:07.318831 containerd[2081]: time="2025-01-17T12:14:07.318617084Z" level=warning msg="cleaning up after shim disconnected" id=f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d namespace=k8s.io
Jan 17 12:14:07.318831 containerd[2081]: time="2025-01-17T12:14:07.318636997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:14:07.749440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d-rootfs.mount: Deactivated successfully.
Jan 17 12:14:07.893954 containerd[2081]: time="2025-01-17T12:14:07.893909063Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:14:07.947269 containerd[2081]: time="2025-01-17T12:14:07.947220222Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\""
Jan 17 12:14:07.950249 containerd[2081]: time="2025-01-17T12:14:07.949055944Z" level=info msg="StartContainer for \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\""
Jan 17 12:14:08.002540 kubelet[3681]: I0117 12:14:08.002398    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-br66s" podStartSLOduration=20.45547009 podStartE2EDuration="23.979067309s" podCreationTimestamp="2025-01-17 12:13:44 +0000 UTC" firstStartedPulling="2025-01-17 12:13:44.672760499 +0000 UTC m=+12.677301718" lastFinishedPulling="2025-01-17 12:13:48.196357711 +0000 UTC m=+16.200898937" observedRunningTime="2025-01-17 12:13:49.594105022 +0000 UTC m=+17.598646256" watchObservedRunningTime="2025-01-17 12:14:07.979067309 +0000 UTC m=+35.983608542"
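The tracker line above reports two numbers: podStartE2EDuration (creation to observed running) and podStartSLOduration, which excludes image-pull time. The arithmetic is consistent with the timestamps it prints; a sketch reproducing it with the stdlib (this is my reconstruction of the relationship, not the kubelet's own code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout for timestamps like "2025-01-17 12:13:44.672760499 +0000 UTC",
	// the default Go time.Time formatting that appears in the kubelet line.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2025-01-17 12:13:44.672760499 +0000 UTC")
	lastPull := parse("2025-01-17 12:13:48.196357711 +0000 UTC")
	e2e := 23979067309 * time.Nanosecond // podStartE2EDuration="23.979067309s"

	// SLO duration excludes the time spent pulling images.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo.Seconds()) // ≈ 20.455470097, the podStartSLOduration above
}
```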
Jan 17 12:14:08.035908 systemd[1]: run-containerd-runc-k8s.io-dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724-runc.pcykmV.mount: Deactivated successfully.
Jan 17 12:14:08.069584 containerd[2081]: time="2025-01-17T12:14:08.069533813Z" level=info msg="StartContainer for \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\" returns successfully"
Jan 17 12:14:08.086144 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:14:08.086572 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:14:08.086663 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:14:08.095959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:14:08.130541 containerd[2081]: time="2025-01-17T12:14:08.130222154Z" level=info msg="shim disconnected" id=dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724 namespace=k8s.io
Jan 17 12:14:08.131562 containerd[2081]: time="2025-01-17T12:14:08.131515158Z" level=warning msg="cleaning up after shim disconnected" id=dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724 namespace=k8s.io
Jan 17 12:14:08.131675 containerd[2081]: time="2025-01-17T12:14:08.131659963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:14:08.143049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:14:08.753554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724-rootfs.mount: Deactivated successfully.
Jan 17 12:14:08.873117 containerd[2081]: time="2025-01-17T12:14:08.871825907Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:14:08.930778 containerd[2081]: time="2025-01-17T12:14:08.930601253Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\""
Jan 17 12:14:08.934917 containerd[2081]: time="2025-01-17T12:14:08.931812323Z" level=info msg="StartContainer for \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\""
Jan 17 12:14:09.057061 containerd[2081]: time="2025-01-17T12:14:09.056860762Z" level=info msg="StartContainer for \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\" returns successfully"
Jan 17 12:14:09.104429 containerd[2081]: time="2025-01-17T12:14:09.104311333Z" level=info msg="shim disconnected" id=7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377 namespace=k8s.io
Jan 17 12:14:09.104429 containerd[2081]: time="2025-01-17T12:14:09.104425086Z" level=warning msg="cleaning up after shim disconnected" id=7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377 namespace=k8s.io
Jan 17 12:14:09.104429 containerd[2081]: time="2025-01-17T12:14:09.104437889Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:14:09.751803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377-rootfs.mount: Deactivated successfully.
Jan 17 12:14:09.968298 containerd[2081]: time="2025-01-17T12:14:09.966616969Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:14:10.073096 containerd[2081]: time="2025-01-17T12:14:10.072379114Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\""
Jan 17 12:14:10.075222 containerd[2081]: time="2025-01-17T12:14:10.073940004Z" level=info msg="StartContainer for \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\""
Jan 17 12:14:10.222390 containerd[2081]: time="2025-01-17T12:14:10.222338937Z" level=info msg="StartContainer for \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\" returns successfully"
Jan 17 12:14:10.252386 containerd[2081]: time="2025-01-17T12:14:10.252320595Z" level=info msg="shim disconnected" id=4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0 namespace=k8s.io
Jan 17 12:14:10.252820 containerd[2081]: time="2025-01-17T12:14:10.252467779Z" level=warning msg="cleaning up after shim disconnected" id=4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0 namespace=k8s.io
Jan 17 12:14:10.252820 containerd[2081]: time="2025-01-17T12:14:10.252480311Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:14:10.749891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0-rootfs.mount: Deactivated successfully.
Jan 17 12:14:10.903443 containerd[2081]: time="2025-01-17T12:14:10.903387178Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:14:10.950783 containerd[2081]: time="2025-01-17T12:14:10.950599356Z" level=info msg="CreateContainer within sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\""
Jan 17 12:14:10.955620 containerd[2081]: time="2025-01-17T12:14:10.955305206Z" level=info msg="StartContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\""
Jan 17 12:14:11.053847 containerd[2081]: time="2025-01-17T12:14:11.052073917Z" level=info msg="StartContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" returns successfully"
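The repeated StartContainer / "shim disconnected" pairs between 12:14:06 and 12:14:11 are the Cilium DaemonSet's init containers running to completion one after another; each "shim disconnected" marks a container exiting normally, not a failure. The observed start order is below, with one-line glosses that are my own reading of the names (standard for Cilium's chart, not stated anywhere in the log):

```go
package main

import "fmt"

func main() {
	// Start order observed in the log; purposes are inferred from the names.
	steps := []struct{ name, purpose string }{
		{"mount-cgroup", "make the cgroup filesystem available to the agent"},
		{"apply-sysctl-overwrites", "set kernel sysctls the datapath expects"},
		{"mount-bpf-fs", "mount the BPF filesystem at /sys/fs/bpf"},
		{"clean-cilium-state", "clear stale state from a previous run"},
		{"cilium-agent", "the long-running agent container itself"},
	}
	for i, s := range steps {
		fmt.Printf("%d. %s: %s\n", i+1, s.name, s.purpose)
	}
}
```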
Jan 17 12:14:11.510513 kubelet[3681]: I0117 12:14:11.509033    3681 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:14:11.630424 kubelet[3681]: I0117 12:14:11.630371    3681 topology_manager.go:215] "Topology Admit Handler" podUID="3a025f83-6d77-4151-9a5e-4e06abfc4c77" podNamespace="kube-system" podName="coredns-76f75df574-rrmq2"
Jan 17 12:14:11.636998 kubelet[3681]: I0117 12:14:11.635521    3681 topology_manager.go:215] "Topology Admit Handler" podUID="fbce4973-bdcf-4418-8a4e-7c463617c72e" podNamespace="kube-system" podName="coredns-76f75df574-r6t4c"
Jan 17 12:14:11.769333 kubelet[3681]: I0117 12:14:11.769213    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54n9\" (UniqueName: \"kubernetes.io/projected/fbce4973-bdcf-4418-8a4e-7c463617c72e-kube-api-access-w54n9\") pod \"coredns-76f75df574-r6t4c\" (UID: \"fbce4973-bdcf-4418-8a4e-7c463617c72e\") " pod="kube-system/coredns-76f75df574-r6t4c"
Jan 17 12:14:11.773830 kubelet[3681]: I0117 12:14:11.773710    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a025f83-6d77-4151-9a5e-4e06abfc4c77-config-volume\") pod \"coredns-76f75df574-rrmq2\" (UID: \"3a025f83-6d77-4151-9a5e-4e06abfc4c77\") " pod="kube-system/coredns-76f75df574-rrmq2"
Jan 17 12:14:11.774024 kubelet[3681]: I0117 12:14:11.773884    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbce4973-bdcf-4418-8a4e-7c463617c72e-config-volume\") pod \"coredns-76f75df574-r6t4c\" (UID: \"fbce4973-bdcf-4418-8a4e-7c463617c72e\") " pod="kube-system/coredns-76f75df574-r6t4c"
Jan 17 12:14:11.774024 kubelet[3681]: I0117 12:14:11.773927    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vnj\" (UniqueName: \"kubernetes.io/projected/3a025f83-6d77-4151-9a5e-4e06abfc4c77-kube-api-access-s9vnj\") pod \"coredns-76f75df574-rrmq2\" (UID: \"3a025f83-6d77-4151-9a5e-4e06abfc4c77\") " pod="kube-system/coredns-76f75df574-rrmq2"
Jan 17 12:14:11.969528 containerd[2081]: time="2025-01-17T12:14:11.968080399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rrmq2,Uid:3a025f83-6d77-4151-9a5e-4e06abfc4c77,Namespace:kube-system,Attempt:0,}"
Jan 17 12:14:11.983779 containerd[2081]: time="2025-01-17T12:14:11.983660804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6t4c,Uid:fbce4973-bdcf-4418-8a4e-7c463617c72e,Namespace:kube-system,Attempt:0,}"
Jan 17 12:14:12.814208 systemd-resolved[1968]: Under memory pressure, flushing caches.
Jan 17 12:14:12.814237 systemd-resolved[1968]: Flushed all caches.
Jan 17 12:14:12.815327 systemd-journald[1567]: Under memory pressure, flushing caches.
Jan 17 12:14:14.422293 systemd-networkd[1644]: cilium_host: Link UP
Jan 17 12:14:14.423616 systemd-networkd[1644]: cilium_net: Link UP
Jan 17 12:14:14.425857 (udev-worker)[4461]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:14.427522 systemd-networkd[1644]: cilium_net: Gained carrier
Jan 17 12:14:14.428123 systemd-networkd[1644]: cilium_host: Gained carrier
Jan 17 12:14:14.430075 (udev-worker)[4459]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:14.517455 systemd-networkd[1644]: cilium_host: Gained IPv6LL
Jan 17 12:14:14.614837 systemd-networkd[1644]: cilium_net: Gained IPv6LL
Jan 17 12:14:14.723472 (udev-worker)[4533]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:14.738853 systemd-networkd[1644]: cilium_vxlan: Link UP
Jan 17 12:14:14.738863 systemd-networkd[1644]: cilium_vxlan: Gained carrier
Jan 17 12:14:16.717333 systemd-networkd[1644]: cilium_vxlan: Gained IPv6LL
Jan 17 12:14:16.990257 systemd[1]: Started sshd@7-172.31.29.55:22-139.178.89.65:51730.service - OpenSSH per-connection server daemon (139.178.89.65:51730).
Jan 17 12:14:17.250732 sshd[4586]: Accepted publickey for core from 139.178.89.65 port 51730 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:17.253067 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:17.268292 systemd-logind[2055]: New session 8 of user core.
Jan 17 12:14:17.280862 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:14:18.306717 sshd[4586]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:18.321490 systemd-logind[2055]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:14:18.322306 systemd[1]: sshd@7-172.31.29.55:22-139.178.89.65:51730.service: Deactivated successfully.
Jan 17 12:14:18.328137 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:14:18.329922 systemd-logind[2055]: Removed session 8.
Jan 17 12:14:18.364090 kernel: NET: Registered PF_ALG protocol family
Jan 17 12:14:19.395054 ntpd[2038]: Listen normally on 6 cilium_host 192.168.0.169:123
Jan 17 12:14:19.395158 ntpd[2038]: Listen normally on 7 cilium_net [fe80::dcaa:aeff:feb5:7078%4]:123
Jan 17 12:14:19.395218 ntpd[2038]: Listen normally on 8 cilium_host [fe80::b441:deff:fe81:f248%5]:123
Jan 17 12:14:19.395676 ntpd[2038]: Listen normally on 9 cilium_vxlan [fe80::a481:18ff:fe3a:883c%6]:123
Jan 17 12:14:19.893439 (udev-worker)[4605]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:19.898073 systemd-networkd[1644]: lxc_health: Link UP
Jan 17 12:14:19.905078 (udev-worker)[4847]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:19.914289 systemd-networkd[1644]: lxc_health: Gained carrier
Jan 17 12:14:20.175077 systemd-networkd[1644]: lxc4b1b86ab089a: Link UP
Jan 17 12:14:20.180021 kernel: eth0: renamed from tmp2410c
Jan 17 12:14:20.184152 systemd-networkd[1644]: lxc4b1b86ab089a: Gained carrier
Jan 17 12:14:20.195126 (udev-worker)[4857]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:20.239038 systemd-networkd[1644]: lxcc812992af1cd: Link UP
Jan 17 12:14:20.258196 kernel: eth0: renamed from tmp41b39
Jan 17 12:14:20.267424 (udev-worker)[4867]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:14:20.293343 systemd-networkd[1644]: lxcc812992af1cd: Gained carrier
Jan 17 12:14:20.793114 kubelet[3681]: I0117 12:14:20.792907    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5bg4m" podStartSLOduration=15.216188995 podStartE2EDuration="36.775203874s" podCreationTimestamp="2025-01-17 12:13:44 +0000 UTC" firstStartedPulling="2025-01-17 12:13:44.852277229 +0000 UTC m=+12.856818443" lastFinishedPulling="2025-01-17 12:14:06.411292108 +0000 UTC m=+34.415833322" observedRunningTime="2025-01-17 12:14:11.969462651 +0000 UTC m=+39.974003887" watchObservedRunningTime="2025-01-17 12:14:20.775203874 +0000 UTC m=+48.779745110"
Jan 17 12:14:21.197162 systemd-networkd[1644]: lxc_health: Gained IPv6LL
Jan 17 12:14:21.647251 systemd-networkd[1644]: lxc4b1b86ab089a: Gained IPv6LL
Jan 17 12:14:22.093218 systemd-networkd[1644]: lxcc812992af1cd: Gained IPv6LL
Jan 17 12:14:23.346418 systemd[1]: Started sshd@8-172.31.29.55:22-139.178.89.65:43076.service - OpenSSH per-connection server daemon (139.178.89.65:43076).
Jan 17 12:14:23.607502 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 43076 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:23.608698 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:23.638661 systemd-logind[2055]: New session 9 of user core.
Jan 17 12:14:23.645387 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:14:24.011076 sshd[4887]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:24.019624 systemd[1]: sshd@8-172.31.29.55:22-139.178.89.65:43076.service: Deactivated successfully.
Jan 17 12:14:24.030834 systemd-logind[2055]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:14:24.035531 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:14:24.039663 systemd-logind[2055]: Removed session 9.
Jan 17 12:14:24.395115 ntpd[2038]: Listen normally on 10 lxc_health [fe80::d051:9bff:fe7a:912e%8]:123
Jan 17 12:14:24.395207 ntpd[2038]: Listen normally on 11 lxc4b1b86ab089a [fe80::c485:cfff:fec3:a34f%10]:123
Jan 17 12:14:24.395251 ntpd[2038]: Listen normally on 12 lxcc812992af1cd [fe80::2400:29ff:fe95:6b83%12]:123
Jan 17 12:14:27.974133 containerd[2081]: time="2025-01-17T12:14:27.970383652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:14:27.974133 containerd[2081]: time="2025-01-17T12:14:27.970509721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:14:27.974133 containerd[2081]: time="2025-01-17T12:14:27.970588001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:14:27.974133 containerd[2081]: time="2025-01-17T12:14:27.970780170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:14:28.057808 containerd[2081]: time="2025-01-17T12:14:28.057690551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:14:28.058444 containerd[2081]: time="2025-01-17T12:14:28.058168879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:14:28.058444 containerd[2081]: time="2025-01-17T12:14:28.058234823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:14:28.058947 containerd[2081]: time="2025-01-17T12:14:28.058807438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:14:28.349343 containerd[2081]: time="2025-01-17T12:14:28.349293765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6t4c,Uid:fbce4973-bdcf-4418-8a4e-7c463617c72e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2410ca14fe3d2b724159133091e9b327a106184c39239763b95332383d588701\""
Jan 17 12:14:28.361221 containerd[2081]: time="2025-01-17T12:14:28.361110459Z" level=info msg="CreateContainer within sandbox \"2410ca14fe3d2b724159133091e9b327a106184c39239763b95332383d588701\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:14:28.363413 containerd[2081]: time="2025-01-17T12:14:28.361266597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rrmq2,Uid:3a025f83-6d77-4151-9a5e-4e06abfc4c77,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b39a877882b64a1f0c603b05a663114af9e8a61fee18a1932c4fce9ff4184b\""
Jan 17 12:14:28.370086 containerd[2081]: time="2025-01-17T12:14:28.369567563Z" level=info msg="CreateContainer within sandbox \"41b39a877882b64a1f0c603b05a663114af9e8a61fee18a1932c4fce9ff4184b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:14:28.417393 containerd[2081]: time="2025-01-17T12:14:28.417345041Z" level=info msg="CreateContainer within sandbox \"41b39a877882b64a1f0c603b05a663114af9e8a61fee18a1932c4fce9ff4184b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e8ed56a3d07b237e16afc906262a68145d7393f6fbeccb6db98f3646f8812e7\""
Jan 17 12:14:28.423380 containerd[2081]: time="2025-01-17T12:14:28.418188207Z" level=info msg="StartContainer for \"2e8ed56a3d07b237e16afc906262a68145d7393f6fbeccb6db98f3646f8812e7\""
Jan 17 12:14:28.427462 containerd[2081]: time="2025-01-17T12:14:28.427413776Z" level=info msg="CreateContainer within sandbox \"2410ca14fe3d2b724159133091e9b327a106184c39239763b95332383d588701\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c44a758077e457a0053ae0b67c58d8293c81ad9fac433e615b38eaaac58c0246\""
Jan 17 12:14:28.428754 containerd[2081]: time="2025-01-17T12:14:28.428720053Z" level=info msg="StartContainer for \"c44a758077e457a0053ae0b67c58d8293c81ad9fac433e615b38eaaac58c0246\""
Jan 17 12:14:28.582092 containerd[2081]: time="2025-01-17T12:14:28.582039084Z" level=info msg="StartContainer for \"c44a758077e457a0053ae0b67c58d8293c81ad9fac433e615b38eaaac58c0246\" returns successfully"
Jan 17 12:14:28.582255 containerd[2081]: time="2025-01-17T12:14:28.582053579Z" level=info msg="StartContainer for \"2e8ed56a3d07b237e16afc906262a68145d7393f6fbeccb6db98f3646f8812e7\" returns successfully"
Jan 17 12:14:29.065138 systemd[1]: Started sshd@9-172.31.29.55:22-139.178.89.65:43080.service - OpenSSH per-connection server daemon (139.178.89.65:43080).
Jan 17 12:14:29.181138 kubelet[3681]: I0117 12:14:29.181085    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r6t4c" podStartSLOduration=45.181027453 podStartE2EDuration="45.181027453s" podCreationTimestamp="2025-01-17 12:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:29.104499411 +0000 UTC m=+57.109040645" watchObservedRunningTime="2025-01-17 12:14:29.181027453 +0000 UTC m=+57.185568686"
Jan 17 12:14:29.333199 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 43080 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:29.335891 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:29.347204 systemd-logind[2055]: New session 10 of user core.
Jan 17 12:14:29.350411 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:14:29.751171 sshd[5057]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:29.762461 systemd[1]: sshd@9-172.31.29.55:22-139.178.89.65:43080.service: Deactivated successfully.
Jan 17 12:14:29.776574 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:14:29.778017 systemd-logind[2055]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:14:29.784802 systemd-logind[2055]: Removed session 10.
Jan 17 12:14:32.016345 kubelet[3681]: I0117 12:14:32.013848    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rrmq2" podStartSLOduration=48.013792689 podStartE2EDuration="48.013792689s" podCreationTimestamp="2025-01-17 12:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:29.181554552 +0000 UTC m=+57.186095786" watchObservedRunningTime="2025-01-17 12:14:32.013792689 +0000 UTC m=+60.018333929"
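In this line, as in the kube-proxy and earlier coredns tracker lines, firstStartedPulling and lastFinishedPulling are "0001-01-01 00:00:00 +0000 UTC", so podStartSLOduration equals podStartE2EDuration: no image pull was recorded for these pods. That timestamp is Go's zero time.Time value, as a two-line stdlib check shows (nothing kubelet-specific here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The "0001-01-01 00:00:00 +0000 UTC" pull timestamps in the tracker
	// lines are Go's zero time.Time: no pull ever happened for these pods.
	var never time.Time
	fmt.Println(never.IsZero()) // true
	fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC
}
```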
Jan 17 12:14:34.783279 systemd[1]: Started sshd@10-172.31.29.55:22-139.178.89.65:46058.service - OpenSSH per-connection server daemon (139.178.89.65:46058).
Jan 17 12:14:34.989587 sshd[5098]: Accepted publickey for core from 139.178.89.65 port 46058 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:34.990873 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:34.997889 systemd-logind[2055]: New session 11 of user core.
Jan 17 12:14:35.003810 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:14:35.295249 sshd[5098]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:35.307158 systemd-logind[2055]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:14:35.307535 systemd[1]: sshd@10-172.31.29.55:22-139.178.89.65:46058.service: Deactivated successfully.
Jan 17 12:14:35.314440 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:14:35.316727 systemd-logind[2055]: Removed session 11.
Jan 17 12:14:40.337887 systemd[1]: Started sshd@11-172.31.29.55:22-139.178.89.65:46072.service - OpenSSH per-connection server daemon (139.178.89.65:46072).
Jan 17 12:14:40.500683 sshd[5113]: Accepted publickey for core from 139.178.89.65 port 46072 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:40.501673 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:40.516778 systemd-logind[2055]: New session 12 of user core.
Jan 17 12:14:40.520353 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:14:40.824562 sshd[5113]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:40.836654 systemd[1]: sshd@11-172.31.29.55:22-139.178.89.65:46072.service: Deactivated successfully.
Jan 17 12:14:40.844341 systemd-logind[2055]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:14:40.845454 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:14:40.856721 systemd[1]: Started sshd@12-172.31.29.55:22-139.178.89.65:46078.service - OpenSSH per-connection server daemon (139.178.89.65:46078).
Jan 17 12:14:40.858127 systemd-logind[2055]: Removed session 12.
Jan 17 12:14:41.038138 sshd[5128]: Accepted publickey for core from 139.178.89.65 port 46078 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:41.040164 sshd[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:41.066179 systemd-logind[2055]: New session 13 of user core.
Jan 17 12:14:41.073546 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:14:41.555690 sshd[5128]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:41.576066 systemd[1]: sshd@12-172.31.29.55:22-139.178.89.65:46078.service: Deactivated successfully.
Jan 17 12:14:41.612088 systemd-logind[2055]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:14:41.626704 systemd[1]: Started sshd@13-172.31.29.55:22-139.178.89.65:41534.service - OpenSSH per-connection server daemon (139.178.89.65:41534).
Jan 17 12:14:41.627612 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:14:41.640725 systemd-logind[2055]: Removed session 13.
Jan 17 12:14:41.828354 sshd[5141]: Accepted publickey for core from 139.178.89.65 port 41534 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:41.830145 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:41.838576 systemd-logind[2055]: New session 14 of user core.
Jan 17 12:14:41.844086 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:14:42.221752 sshd[5141]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:42.232114 systemd[1]: sshd@13-172.31.29.55:22-139.178.89.65:41534.service: Deactivated successfully.
Jan 17 12:14:42.249718 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:14:42.251046 systemd-logind[2055]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:14:42.253220 systemd-logind[2055]: Removed session 14.
Jan 17 12:14:47.252464 systemd[1]: Started sshd@14-172.31.29.55:22-139.178.89.65:41548.service - OpenSSH per-connection server daemon (139.178.89.65:41548).
Jan 17 12:14:47.476036 sshd[5158]: Accepted publickey for core from 139.178.89.65 port 41548 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:47.478872 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:47.502066 systemd-logind[2055]: New session 15 of user core.
Jan 17 12:14:47.513933 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:14:47.784681 sshd[5158]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:47.789103 systemd[1]: sshd@14-172.31.29.55:22-139.178.89.65:41548.service: Deactivated successfully.
Jan 17 12:14:47.795025 systemd-logind[2055]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:14:47.795575 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:14:47.797492 systemd-logind[2055]: Removed session 15.
Jan 17 12:14:52.821768 systemd[1]: Started sshd@15-172.31.29.55:22-139.178.89.65:34144.service - OpenSSH per-connection server daemon (139.178.89.65:34144).
Jan 17 12:14:53.024032 sshd[5172]: Accepted publickey for core from 139.178.89.65 port 34144 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:53.027542 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:53.034417 systemd-logind[2055]: New session 16 of user core.
Jan 17 12:14:53.042405 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:14:53.450151 sshd[5172]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:53.457147 systemd[1]: sshd@15-172.31.29.55:22-139.178.89.65:34144.service: Deactivated successfully.
Jan 17 12:14:53.463484 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:14:53.466546 systemd-logind[2055]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:14:53.468620 systemd-logind[2055]: Removed session 16.
Jan 17 12:14:53.481383 systemd[1]: Started sshd@16-172.31.29.55:22-139.178.89.65:34158.service - OpenSSH per-connection server daemon (139.178.89.65:34158).
Jan 17 12:14:53.668088 sshd[5186]: Accepted publickey for core from 139.178.89.65 port 34158 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:53.674256 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:53.689422 systemd-logind[2055]: New session 17 of user core.
Jan 17 12:14:53.696587 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:14:57.258426 sshd[5186]: pam_unix(sshd:session): session closed for user core
Jan 17 12:14:57.269074 systemd[1]: sshd@16-172.31.29.55:22-139.178.89.65:34158.service: Deactivated successfully.
Jan 17 12:14:57.276345 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:14:57.277576 systemd-logind[2055]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:14:57.286607 systemd[1]: Started sshd@17-172.31.29.55:22-139.178.89.65:34162.service - OpenSSH per-connection server daemon (139.178.89.65:34162).
Jan 17 12:14:57.287853 systemd-logind[2055]: Removed session 17.
Jan 17 12:14:57.475106 sshd[5198]: Accepted publickey for core from 139.178.89.65 port 34162 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:14:57.485635 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:14:57.497891 systemd-logind[2055]: New session 18 of user core.
Jan 17 12:14:57.503084 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:15:00.324397 sshd[5198]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:00.357294 systemd[1]: sshd@17-172.31.29.55:22-139.178.89.65:34162.service: Deactivated successfully.
Jan 17 12:15:00.376340 systemd-logind[2055]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:15:00.383043 systemd[1]: Started sshd@18-172.31.29.55:22-139.178.89.65:34164.service - OpenSSH per-connection server daemon (139.178.89.65:34164).
Jan 17 12:15:00.384124 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:15:00.396500 systemd-logind[2055]: Removed session 18.
Jan 17 12:15:00.595193 sshd[5221]: Accepted publickey for core from 139.178.89.65 port 34164 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:00.601231 sshd[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:00.624846 systemd-logind[2055]: New session 19 of user core.
Jan 17 12:15:00.631401 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:15:02.930948 systemd-journald[1567]: Under memory pressure, flushing caches.
Jan 17 12:15:02.925231 systemd-resolved[1968]: Under memory pressure, flushing caches.
Jan 17 12:15:02.925271 systemd-resolved[1968]: Flushed all caches.
Jan 17 12:15:03.428783 sshd[5221]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:03.490903 systemd[1]: Started sshd@19-172.31.29.55:22-139.178.89.65:43568.service - OpenSSH per-connection server daemon (139.178.89.65:43568).
Jan 17 12:15:03.542841 systemd[1]: sshd@18-172.31.29.55:22-139.178.89.65:34164.service: Deactivated successfully.
Jan 17 12:15:03.546304 systemd-logind[2055]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:15:03.558200 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:15:03.566053 systemd-logind[2055]: Removed session 19.
Jan 17 12:15:03.764714 sshd[5231]: Accepted publickey for core from 139.178.89.65 port 43568 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:03.769585 sshd[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:03.795865 systemd-logind[2055]: New session 20 of user core.
Jan 17 12:15:03.821733 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:15:04.218146 sshd[5231]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:04.231200 systemd[1]: sshd@19-172.31.29.55:22-139.178.89.65:43568.service: Deactivated successfully.
Jan 17 12:15:04.242999 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:15:04.245931 systemd-logind[2055]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:15:04.257916 systemd-logind[2055]: Removed session 20.
Jan 17 12:15:09.250139 systemd[1]: Started sshd@20-172.31.29.55:22-139.178.89.65:43582.service - OpenSSH per-connection server daemon (139.178.89.65:43582).
Jan 17 12:15:09.430060 sshd[5251]: Accepted publickey for core from 139.178.89.65 port 43582 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:09.433675 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:09.448268 systemd-logind[2055]: New session 21 of user core.
Jan 17 12:15:09.456434 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:15:09.793922 sshd[5251]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:09.799966 systemd-logind[2055]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:15:09.800614 systemd[1]: sshd@20-172.31.29.55:22-139.178.89.65:43582.service: Deactivated successfully.
Jan 17 12:15:09.811577 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:15:09.812760 systemd-logind[2055]: Removed session 21.
Jan 17 12:15:14.838809 systemd[1]: Started sshd@21-172.31.29.55:22-139.178.89.65:42660.service - OpenSSH per-connection server daemon (139.178.89.65:42660).
Jan 17 12:15:15.052020 sshd[5264]: Accepted publickey for core from 139.178.89.65 port 42660 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:15.053487 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:15.066681 systemd-logind[2055]: New session 22 of user core.
Jan 17 12:15:15.078095 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:15:15.333798 sshd[5264]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:15.347201 systemd[1]: sshd@21-172.31.29.55:22-139.178.89.65:42660.service: Deactivated successfully.
Jan 17 12:15:15.354436 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:15:15.354912 systemd-logind[2055]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:15:15.358192 systemd-logind[2055]: Removed session 22.
Jan 17 12:15:20.364480 systemd[1]: Started sshd@22-172.31.29.55:22-139.178.89.65:42666.service - OpenSSH per-connection server daemon (139.178.89.65:42666).
Jan 17 12:15:20.549016 sshd[5281]: Accepted publickey for core from 139.178.89.65 port 42666 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:20.551399 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:20.558332 systemd-logind[2055]: New session 23 of user core.
Jan 17 12:15:20.567403 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:15:20.798367 sshd[5281]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:20.804182 systemd-logind[2055]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:15:20.806327 systemd[1]: sshd@22-172.31.29.55:22-139.178.89.65:42666.service: Deactivated successfully.
Jan 17 12:15:20.814404 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:15:20.821289 systemd-logind[2055]: Removed session 23.
Jan 17 12:15:20.827461 systemd[1]: Started sshd@23-172.31.29.55:22-139.178.89.65:42672.service - OpenSSH per-connection server daemon (139.178.89.65:42672).
Jan 17 12:15:21.035543 sshd[5295]: Accepted publickey for core from 139.178.89.65 port 42672 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:21.037332 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:21.046173 systemd-logind[2055]: New session 24 of user core.
Jan 17 12:15:21.053468 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:15:23.814786 containerd[2081]: time="2025-01-17T12:15:23.814705694Z" level=info msg="StopContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" with timeout 30 (s)"
Jan 17 12:15:23.820725 containerd[2081]: time="2025-01-17T12:15:23.818964079Z" level=info msg="Stop container \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" with signal terminated"
Jan 17 12:15:23.888648 systemd[1]: run-containerd-runc-k8s.io-b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4-runc.Pg3gDG.mount: Deactivated successfully.
Jan 17 12:15:23.917833 containerd[2081]: time="2025-01-17T12:15:23.917767476Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:15:23.930218 containerd[2081]: time="2025-01-17T12:15:23.930162861Z" level=info msg="StopContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" with timeout 2 (s)"
Jan 17 12:15:23.930789 containerd[2081]: time="2025-01-17T12:15:23.930739185Z" level=info msg="Stop container \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" with signal terminated"
Jan 17 12:15:23.946135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405-rootfs.mount: Deactivated successfully.
Jan 17 12:15:23.953385 systemd-networkd[1644]: lxc_health: Link DOWN
Jan 17 12:15:23.953393 systemd-networkd[1644]: lxc_health: Lost carrier
Jan 17 12:15:23.993354 containerd[2081]: time="2025-01-17T12:15:23.993269567Z" level=info msg="shim disconnected" id=c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405 namespace=k8s.io
Jan 17 12:15:23.993354 containerd[2081]: time="2025-01-17T12:15:23.993358357Z" level=warning msg="cleaning up after shim disconnected" id=c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405 namespace=k8s.io
Jan 17 12:15:23.994535 containerd[2081]: time="2025-01-17T12:15:23.993373615Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:24.036588 containerd[2081]: time="2025-01-17T12:15:24.035547100Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:15:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:15:24.045307 containerd[2081]: time="2025-01-17T12:15:24.043378348Z" level=info msg="StopContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" returns successfully"
Jan 17 12:15:24.043893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4-rootfs.mount: Deactivated successfully.
Jan 17 12:15:24.055364 containerd[2081]: time="2025-01-17T12:15:24.047784957Z" level=info msg="StopPodSandbox for \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\""
Jan 17 12:15:24.055364 containerd[2081]: time="2025-01-17T12:15:24.047851349Z" level=info msg="Container to stop \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.057348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6-shm.mount: Deactivated successfully.
Jan 17 12:15:24.062666 containerd[2081]: time="2025-01-17T12:15:24.062554268Z" level=info msg="shim disconnected" id=b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4 namespace=k8s.io
Jan 17 12:15:24.063238 containerd[2081]: time="2025-01-17T12:15:24.063051729Z" level=warning msg="cleaning up after shim disconnected" id=b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4 namespace=k8s.io
Jan 17 12:15:24.063238 containerd[2081]: time="2025-01-17T12:15:24.063083505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:24.098823 containerd[2081]: time="2025-01-17T12:15:24.098753107Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:15:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:15:24.103744 containerd[2081]: time="2025-01-17T12:15:24.103485099Z" level=info msg="StopContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" returns successfully"
Jan 17 12:15:24.104171 containerd[2081]: time="2025-01-17T12:15:24.104139504Z" level=info msg="StopPodSandbox for \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\""
Jan 17 12:15:24.104275 containerd[2081]: time="2025-01-17T12:15:24.104185728Z" level=info msg="Container to stop \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.104275 containerd[2081]: time="2025-01-17T12:15:24.104204777Z" level=info msg="Container to stop \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.104275 containerd[2081]: time="2025-01-17T12:15:24.104219614Z" level=info msg="Container to stop \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.104275 containerd[2081]: time="2025-01-17T12:15:24.104233661Z" level=info msg="Container to stop \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.104275 containerd[2081]: time="2025-01-17T12:15:24.104248608Z" level=info msg="Container to stop \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:15:24.133436 containerd[2081]: time="2025-01-17T12:15:24.133142797Z" level=info msg="shim disconnected" id=0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6 namespace=k8s.io
Jan 17 12:15:24.133436 containerd[2081]: time="2025-01-17T12:15:24.133229034Z" level=warning msg="cleaning up after shim disconnected" id=0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6 namespace=k8s.io
Jan 17 12:15:24.133436 containerd[2081]: time="2025-01-17T12:15:24.133245513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:24.182866 containerd[2081]: time="2025-01-17T12:15:24.182311346Z" level=info msg="shim disconnected" id=4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e namespace=k8s.io
Jan 17 12:15:24.182866 containerd[2081]: time="2025-01-17T12:15:24.182427731Z" level=warning msg="cleaning up after shim disconnected" id=4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e namespace=k8s.io
Jan 17 12:15:24.182866 containerd[2081]: time="2025-01-17T12:15:24.182569903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:24.185949 containerd[2081]: time="2025-01-17T12:15:24.185569662Z" level=info msg="TearDown network for sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" successfully"
Jan 17 12:15:24.185949 containerd[2081]: time="2025-01-17T12:15:24.185685860Z" level=info msg="StopPodSandbox for \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" returns successfully"
Jan 17 12:15:24.220604 containerd[2081]: time="2025-01-17T12:15:24.220451735Z" level=info msg="TearDown network for sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" successfully"
Jan 17 12:15:24.220604 containerd[2081]: time="2025-01-17T12:15:24.220491222Z" level=info msg="StopPodSandbox for \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" returns successfully"
Jan 17 12:15:24.246877 kubelet[3681]: I0117 12:15:24.246845    3681 scope.go:117] "RemoveContainer" containerID="b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4"
Jan 17 12:15:24.250003 containerd[2081]: time="2025-01-17T12:15:24.249325682Z" level=info msg="RemoveContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\""
Jan 17 12:15:24.260794 containerd[2081]: time="2025-01-17T12:15:24.260732739Z" level=info msg="RemoveContainer for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" returns successfully"
Jan 17 12:15:24.264585 kubelet[3681]: I0117 12:15:24.264551    3681 scope.go:117] "RemoveContainer" containerID="4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0"
Jan 17 12:15:24.279011 containerd[2081]: time="2025-01-17T12:15:24.277259846Z" level=info msg="RemoveContainer for \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\""
Jan 17 12:15:24.305469 containerd[2081]: time="2025-01-17T12:15:24.305418730Z" level=info msg="RemoveContainer for \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\" returns successfully"
Jan 17 12:15:24.306124 kubelet[3681]: I0117 12:15:24.306060    3681 scope.go:117] "RemoveContainer" containerID="7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377"
Jan 17 12:15:24.312728 containerd[2081]: time="2025-01-17T12:15:24.312680223Z" level=info msg="RemoveContainer for \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\""
Jan 17 12:15:24.323007 containerd[2081]: time="2025-01-17T12:15:24.322100151Z" level=info msg="RemoveContainer for \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\" returns successfully"
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322509    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg6zm\" (UniqueName: \"kubernetes.io/projected/46f9bfec-05fe-43d0-97e4-009de82ae92e-kube-api-access-kg6zm\") pod \"46f9bfec-05fe-43d0-97e4-009de82ae92e\" (UID: \"46f9bfec-05fe-43d0-97e4-009de82ae92e\") "
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322560    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-config-path\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322593    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-xtables-lock\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322618    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-bpf-maps\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322641    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hostproc\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323159 kubelet[3681]: I0117 12:15:24.322673    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdtsr\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-kube-api-access-mdtsr\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322706    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hubble-tls\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322730    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-etc-cni-netd\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322761    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-clustermesh-secrets\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322784    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cni-path\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322807    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-cgroup\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.323512 kubelet[3681]: I0117 12:15:24.322847    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f9bfec-05fe-43d0-97e4-009de82ae92e-cilium-config-path\") pod \"46f9bfec-05fe-43d0-97e4-009de82ae92e\" (UID: \"46f9bfec-05fe-43d0-97e4-009de82ae92e\") "
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.322872    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-lib-modules\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.322897    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-kernel\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.322922    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-run\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.322951    3681 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-net\") pod \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\" (UID: \"70e8a5de-6357-4fbc-830e-df6a4e2d80ba\") "
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.323051    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.326514 kubelet[3681]: I0117 12:15:24.326428    3681 scope.go:117] "RemoveContainer" containerID="dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724"
Jan 17 12:15:24.330005 kubelet[3681]: I0117 12:15:24.327830    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330005 kubelet[3681]: I0117 12:15:24.328169    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330005 kubelet[3681]: I0117 12:15:24.328384    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330005 kubelet[3681]: I0117 12:15:24.328412    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hostproc" (OuterVolumeSpecName: "hostproc") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330005 kubelet[3681]: I0117 12:15:24.328954    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cni-path" (OuterVolumeSpecName: "cni-path") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330344 kubelet[3681]: I0117 12:15:24.329005    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330344 kubelet[3681]: I0117 12:15:24.329237    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330344 kubelet[3681]: I0117 12:15:24.329264    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.330344 kubelet[3681]: I0117 12:15:24.329295    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:15:24.382134 containerd[2081]: time="2025-01-17T12:15:24.381288494Z" level=info msg="RemoveContainer for \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\""
Jan 17 12:15:24.400257 containerd[2081]: time="2025-01-17T12:15:24.400123584Z" level=info msg="RemoveContainer for \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\" returns successfully"
Jan 17 12:15:24.404008 kubelet[3681]: I0117 12:15:24.402933    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f9bfec-05fe-43d0-97e4-009de82ae92e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46f9bfec-05fe-43d0-97e4-009de82ae92e" (UID: "46f9bfec-05fe-43d0-97e4-009de82ae92e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:15:24.407602 kubelet[3681]: I0117 12:15:24.404340    3681 scope.go:117] "RemoveContainer" containerID="f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d"
Jan 17 12:15:24.410008 kubelet[3681]: I0117 12:15:24.408657    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:15:24.421586 containerd[2081]: time="2025-01-17T12:15:24.421168207Z" level=info msg="RemoveContainer for \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\""
Jan 17 12:15:24.423150 kubelet[3681]: I0117 12:15:24.422921    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f9bfec-05fe-43d0-97e4-009de82ae92e-kube-api-access-kg6zm" (OuterVolumeSpecName: "kube-api-access-kg6zm") pod "46f9bfec-05fe-43d0-97e4-009de82ae92e" (UID: "46f9bfec-05fe-43d0-97e4-009de82ae92e"). InnerVolumeSpecName "kube-api-access-kg6zm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:15:24.427120 kubelet[3681]: I0117 12:15:24.427076    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-kube-api-access-mdtsr" (OuterVolumeSpecName: "kube-api-access-mdtsr") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "kube-api-access-mdtsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:15:24.430501 kubelet[3681]: I0117 12:15:24.430060    3681 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kg6zm\" (UniqueName: \"kubernetes.io/projected/46f9bfec-05fe-43d0-97e4-009de82ae92e-kube-api-access-kg6zm\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.430816 kubelet[3681]: I0117 12:15:24.430517    3681 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-config-path\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.431075    3681 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-xtables-lock\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.432581    3681 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-bpf-maps\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.432741    3681 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hostproc\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.435428    3681 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mdtsr\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-kube-api-access-mdtsr\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.435518    3681 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-etc-cni-netd\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.436333    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:15:24.439002 kubelet[3681]: I0117 12:15:24.436406    3681 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-cgroup\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436428    3681 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cni-path\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436445    3681 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f9bfec-05fe-43d0-97e4-009de82ae92e-cilium-config-path\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436751    3681 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-lib-modules\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436804    3681 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-net\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436835    3681 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-host-proc-sys-kernel\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439299 kubelet[3681]: I0117 12:15:24.436852    3681 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-cilium-run\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.439738 kubelet[3681]: I0117 12:15:24.439703    3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70e8a5de-6357-4fbc-830e-df6a4e2d80ba" (UID: "70e8a5de-6357-4fbc-830e-df6a4e2d80ba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:15:24.441780 containerd[2081]: time="2025-01-17T12:15:24.441446508Z" level=info msg="RemoveContainer for \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\" returns successfully"
Jan 17 12:15:24.441898 kubelet[3681]: I0117 12:15:24.441804    3681 scope.go:117] "RemoveContainer" containerID="b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4"
Jan 17 12:15:24.455270 containerd[2081]: time="2025-01-17T12:15:24.442124592Z" level=error msg="ContainerStatus for \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\": not found"
Jan 17 12:15:24.458195 kubelet[3681]: E0117 12:15:24.458144    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\": not found" containerID="b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4"
Jan 17 12:15:24.464103 kubelet[3681]: I0117 12:15:24.464025    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4"} err="failed to get container status \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b705a7b300fbb7bab19892dda926bff55cfb7a6c2c2c7e7c28212079798a01d4\": not found"
Jan 17 12:15:24.464471 kubelet[3681]: I0117 12:15:24.464288    3681 scope.go:117] "RemoveContainer" containerID="4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0"
Jan 17 12:15:24.465351 containerd[2081]: time="2025-01-17T12:15:24.464870742Z" level=error msg="ContainerStatus for \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\": not found"
Jan 17 12:15:24.465550 kubelet[3681]: E0117 12:15:24.465287    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\": not found" containerID="4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0"
Jan 17 12:15:24.465550 kubelet[3681]: I0117 12:15:24.465326    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0"} err="failed to get container status \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a9839270e307619ed523a51818f21864d47391744c72140eb4fec021b0ef5c0\": not found"
Jan 17 12:15:24.465879 kubelet[3681]: I0117 12:15:24.465767    3681 scope.go:117] "RemoveContainer" containerID="7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377"
Jan 17 12:15:24.467328 containerd[2081]: time="2025-01-17T12:15:24.467286483Z" level=error msg="ContainerStatus for \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\": not found"
Jan 17 12:15:24.467702 kubelet[3681]: E0117 12:15:24.467674    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\": not found" containerID="7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377"
Jan 17 12:15:24.467805 kubelet[3681]: I0117 12:15:24.467732    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377"} err="failed to get container status \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\": rpc error: code = NotFound desc = an error occurred when try to find container \"7691139a3b598fa8dd98dd85aa6e611487a1496fbbb076106a7beb9827f0d377\": not found"
Jan 17 12:15:24.467805 kubelet[3681]: I0117 12:15:24.467750    3681 scope.go:117] "RemoveContainer" containerID="dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724"
Jan 17 12:15:24.468114 containerd[2081]: time="2025-01-17T12:15:24.468076558Z" level=error msg="ContainerStatus for \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\": not found"
Jan 17 12:15:24.468311 kubelet[3681]: E0117 12:15:24.468258    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\": not found" containerID="dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724"
Jan 17 12:15:24.468466 kubelet[3681]: I0117 12:15:24.468411    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724"} err="failed to get container status \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc1fa559af316de7c43771523488648554b81310987c7daa319e90dded09a724\": not found"
Jan 17 12:15:24.468466 kubelet[3681]: I0117 12:15:24.468429    3681 scope.go:117] "RemoveContainer" containerID="f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d"
Jan 17 12:15:24.468779 containerd[2081]: time="2025-01-17T12:15:24.468737930Z" level=error msg="ContainerStatus for \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\": not found"
Jan 17 12:15:24.469087 kubelet[3681]: E0117 12:15:24.468947    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\": not found" containerID="f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d"
Jan 17 12:15:24.469087 kubelet[3681]: I0117 12:15:24.468995    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d"} err="failed to get container status \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f902da74c3f9631f43717b63ea8885f49826cd4448d94abc2d3d161f414dd86d\": not found"
Jan 17 12:15:24.469087 kubelet[3681]: I0117 12:15:24.469011    3681 scope.go:117] "RemoveContainer" containerID="c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405"
Jan 17 12:15:24.470309 containerd[2081]: time="2025-01-17T12:15:24.470257832Z" level=info msg="RemoveContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\""
Jan 17 12:15:24.475643 containerd[2081]: time="2025-01-17T12:15:24.475590203Z" level=info msg="RemoveContainer for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" returns successfully"
Jan 17 12:15:24.476171 kubelet[3681]: I0117 12:15:24.476141    3681 scope.go:117] "RemoveContainer" containerID="c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405"
Jan 17 12:15:24.476651 containerd[2081]: time="2025-01-17T12:15:24.476591437Z" level=error msg="ContainerStatus for \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\": not found"
Jan 17 12:15:24.476850 kubelet[3681]: E0117 12:15:24.476803    3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\": not found" containerID="c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405"
Jan 17 12:15:24.476921 kubelet[3681]: I0117 12:15:24.476852    3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405"} err="failed to get container status \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\": rpc error: code = NotFound desc = an error occurred when try to find container \"c725e3a5d5afb7dd9946cfb950482ff264c6fee2e641e8c581c9ee6f37177405\": not found"
Jan 17 12:15:24.537307 kubelet[3681]: I0117 12:15:24.537263    3681 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-hubble-tls\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.537307 kubelet[3681]: I0117 12:15:24.537306    3681 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70e8a5de-6357-4fbc-830e-df6a4e2d80ba-clustermesh-secrets\") on node \"ip-172-31-29-55\" DevicePath \"\""
Jan 17 12:15:24.873010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e-rootfs.mount: Deactivated successfully.
Jan 17 12:15:24.873260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e-shm.mount: Deactivated successfully.
Jan 17 12:15:24.873453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6-rootfs.mount: Deactivated successfully.
Jan 17 12:15:24.873744 systemd[1]: var-lib-kubelet-pods-70e8a5de\x2d6357\x2d4fbc\x2d830e\x2ddf6a4e2d80ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmdtsr.mount: Deactivated successfully.
Jan 17 12:15:24.873906 systemd[1]: var-lib-kubelet-pods-70e8a5de\x2d6357\x2d4fbc\x2d830e\x2ddf6a4e2d80ba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 12:15:24.874068 systemd[1]: var-lib-kubelet-pods-70e8a5de\x2d6357\x2d4fbc\x2d830e\x2ddf6a4e2d80ba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:15:24.874221 systemd[1]: var-lib-kubelet-pods-46f9bfec\x2d05fe\x2d43d0\x2d97e4\x2d009de82ae92e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkg6zm.mount: Deactivated successfully.
Jan 17 12:15:25.683287 sshd[5295]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:25.688798 systemd[1]: sshd@23-172.31.29.55:22-139.178.89.65:42672.service: Deactivated successfully.
Jan 17 12:15:25.699027 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:15:25.700527 systemd-logind[2055]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:15:25.715440 systemd[1]: Started sshd@24-172.31.29.55:22-139.178.89.65:53386.service - OpenSSH per-connection server daemon (139.178.89.65:53386).
Jan 17 12:15:25.717102 systemd-logind[2055]: Removed session 24.
Jan 17 12:15:25.905193 sshd[5464]: Accepted publickey for core from 139.178.89.65 port 53386 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:25.907102 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:25.917795 systemd-logind[2055]: New session 25 of user core.
Jan 17 12:15:25.926003 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:15:26.296000 kubelet[3681]: I0117 12:15:26.295592    3681 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="46f9bfec-05fe-43d0-97e4-009de82ae92e" path="/var/lib/kubelet/pods/46f9bfec-05fe-43d0-97e4-009de82ae92e/volumes"
Jan 17 12:15:26.297504 kubelet[3681]: I0117 12:15:26.297476    3681 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" path="/var/lib/kubelet/pods/70e8a5de-6357-4fbc-830e-df6a4e2d80ba/volumes"
Jan 17 12:15:26.395190 ntpd[2038]: Deleting interface #10 lxc_health, fe80::d051:9bff:fe7a:912e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs
Jan 17 12:15:26.969861 sshd[5464]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:26.970585 kubelet[3681]: I0117 12:15:26.970112    3681 topology_manager.go:215] "Topology Admit Handler" podUID="375cf50b-3830-48da-8775-6d1b0b4a8fd8" podNamespace="kube-system" podName="cilium-92qh7"
Jan 17 12:15:26.988532 systemd[1]: sshd@24-172.31.29.55:22-139.178.89.65:53386.service: Deactivated successfully.
Jan 17 12:15:26.994050 systemd-logind[2055]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.004853    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="apply-sysctl-overwrites"
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.005150    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="mount-bpf-fs"
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.008471    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="cilium-agent"
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.008508    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46f9bfec-05fe-43d0-97e4-009de82ae92e" containerName="cilium-operator"
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.008518    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="mount-cgroup"
Jan 17 12:15:27.010019 kubelet[3681]: E0117 12:15:27.008531    3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="clean-cilium-state"
Jan 17 12:15:27.049960 kubelet[3681]: I0117 12:15:27.017320    3681 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e8a5de-6357-4fbc-830e-df6a4e2d80ba" containerName="cilium-agent"
Jan 17 12:15:27.049960 kubelet[3681]: I0117 12:15:27.017393    3681 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f9bfec-05fe-43d0-97e4-009de82ae92e" containerName="cilium-operator"
Jan 17 12:15:27.083391 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:15:27.115597 systemd[1]: Started sshd@25-172.31.29.55:22-139.178.89.65:53396.service - OpenSSH per-connection server daemon (139.178.89.65:53396).
Jan 17 12:15:27.125965 systemd-logind[2055]: Removed session 25.
Jan 17 12:15:27.167176 kubelet[3681]: I0117 12:15:27.163729    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/375cf50b-3830-48da-8775-6d1b0b4a8fd8-hubble-tls\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.169433 kubelet[3681]: I0117 12:15:27.167570    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-cilium-run\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.169433 kubelet[3681]: I0117 12:15:27.168277    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-cilium-cgroup\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.170073 kubelet[3681]: I0117 12:15:27.169799    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-host-proc-sys-net\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.170073 kubelet[3681]: I0117 12:15:27.169956    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-xtables-lock\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.170073 kubelet[3681]: I0117 12:15:27.170025    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-host-proc-sys-kernel\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.170606 kubelet[3681]: I0117 12:15:27.170290    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-hostproc\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.170606 kubelet[3681]: I0117 12:15:27.170364    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-lib-modules\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.172662 kubelet[3681]: I0117 12:15:27.171399    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/375cf50b-3830-48da-8775-6d1b0b4a8fd8-cilium-config-path\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.172854 kubelet[3681]: I0117 12:15:27.172837    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/375cf50b-3830-48da-8775-6d1b0b4a8fd8-cilium-ipsec-secrets\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.174333 kubelet[3681]: I0117 12:15:27.174218    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-etc-cni-netd\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.174333 kubelet[3681]: I0117 12:15:27.174309    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/375cf50b-3830-48da-8775-6d1b0b4a8fd8-clustermesh-secrets\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.174624 kubelet[3681]: I0117 12:15:27.174410    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-bpf-maps\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.176861 kubelet[3681]: I0117 12:15:27.176054    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/375cf50b-3830-48da-8775-6d1b0b4a8fd8-cni-path\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.176861 kubelet[3681]: I0117 12:15:27.176147    3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mjt\" (UniqueName: \"kubernetes.io/projected/375cf50b-3830-48da-8775-6d1b0b4a8fd8-kube-api-access-69mjt\") pod \"cilium-92qh7\" (UID: \"375cf50b-3830-48da-8775-6d1b0b4a8fd8\") " pod="kube-system/cilium-92qh7"
Jan 17 12:15:27.385412 sshd[5477]: Accepted publickey for core from 139.178.89.65 port 53396 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:27.387339 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:27.396445 systemd-logind[2055]: New session 26 of user core.
Jan 17 12:15:27.405854 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:15:27.450824 containerd[2081]: time="2025-01-17T12:15:27.450788899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92qh7,Uid:375cf50b-3830-48da-8775-6d1b0b4a8fd8,Namespace:kube-system,Attempt:0,}"
Jan 17 12:15:27.504994 containerd[2081]: time="2025-01-17T12:15:27.502237803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:15:27.504994 containerd[2081]: time="2025-01-17T12:15:27.504314119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:15:27.504994 containerd[2081]: time="2025-01-17T12:15:27.504424021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:15:27.505590 containerd[2081]: time="2025-01-17T12:15:27.505519886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:15:27.536952 sshd[5477]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:27.550637 systemd[1]: sshd@25-172.31.29.55:22-139.178.89.65:53396.service: Deactivated successfully.
Jan 17 12:15:27.559665 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:15:27.561271 systemd-logind[2055]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:15:27.570371 systemd[1]: Started sshd@26-172.31.29.55:22-139.178.89.65:53408.service - OpenSSH per-connection server daemon (139.178.89.65:53408).
Jan 17 12:15:27.571623 systemd-logind[2055]: Removed session 26.
Jan 17 12:15:27.584921 kubelet[3681]: E0117 12:15:27.584891    3681 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:15:27.598912 containerd[2081]: time="2025-01-17T12:15:27.598800533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92qh7,Uid:375cf50b-3830-48da-8775-6d1b0b4a8fd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\""
Jan 17 12:15:27.619790 containerd[2081]: time="2025-01-17T12:15:27.619532665Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:15:27.643605 containerd[2081]: time="2025-01-17T12:15:27.643504323Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfff385006675335c6f359f9054747f7d82207d05e888e786d7d6ab2546d2d3d\""
Jan 17 12:15:27.644587 containerd[2081]: time="2025-01-17T12:15:27.644554167Z" level=info msg="StartContainer for \"cfff385006675335c6f359f9054747f7d82207d05e888e786d7d6ab2546d2d3d\""
Jan 17 12:15:27.713428 containerd[2081]: time="2025-01-17T12:15:27.713383525Z" level=info msg="StartContainer for \"cfff385006675335c6f359f9054747f7d82207d05e888e786d7d6ab2546d2d3d\" returns successfully"
Jan 17 12:15:27.759971 sshd[5528]: Accepted publickey for core from 139.178.89.65 port 53408 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:15:27.766743 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:27.781558 systemd-logind[2055]: New session 27 of user core.
Jan 17 12:15:27.786513 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:15:27.851615 containerd[2081]: time="2025-01-17T12:15:27.851544780Z" level=info msg="shim disconnected" id=cfff385006675335c6f359f9054747f7d82207d05e888e786d7d6ab2546d2d3d namespace=k8s.io
Jan 17 12:15:27.851615 containerd[2081]: time="2025-01-17T12:15:27.851601904Z" level=warning msg="cleaning up after shim disconnected" id=cfff385006675335c6f359f9054747f7d82207d05e888e786d7d6ab2546d2d3d namespace=k8s.io
Jan 17 12:15:27.851615 containerd[2081]: time="2025-01-17T12:15:27.851614821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:28.270150 containerd[2081]: time="2025-01-17T12:15:28.270105124Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:15:28.296420 containerd[2081]: time="2025-01-17T12:15:28.296345407Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c\""
Jan 17 12:15:28.297551 containerd[2081]: time="2025-01-17T12:15:28.297516312Z" level=info msg="StartContainer for \"e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c\""
Jan 17 12:15:28.408432 containerd[2081]: time="2025-01-17T12:15:28.408378439Z" level=info msg="StartContainer for \"e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c\" returns successfully"
Jan 17 12:15:28.443686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c-rootfs.mount: Deactivated successfully.
Jan 17 12:15:28.448658 containerd[2081]: time="2025-01-17T12:15:28.448595118Z" level=info msg="shim disconnected" id=e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c namespace=k8s.io
Jan 17 12:15:28.448658 containerd[2081]: time="2025-01-17T12:15:28.448653149Z" level=warning msg="cleaning up after shim disconnected" id=e89fec2207a0edd01daf43d409b9306f5de5111689ee3b2fe4d6fd9f8883816c namespace=k8s.io
Jan 17 12:15:28.448658 containerd[2081]: time="2025-01-17T12:15:28.448664572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:29.322001 containerd[2081]: time="2025-01-17T12:15:29.304572092Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:15:29.377356 containerd[2081]: time="2025-01-17T12:15:29.375888796Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838\""
Jan 17 12:15:29.395787 containerd[2081]: time="2025-01-17T12:15:29.395245864Z" level=info msg="StartContainer for \"a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838\""
Jan 17 12:15:29.526711 containerd[2081]: time="2025-01-17T12:15:29.526665365Z" level=info msg="StartContainer for \"a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838\" returns successfully"
Jan 17 12:15:29.574111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838-rootfs.mount: Deactivated successfully.
Jan 17 12:15:29.588364 containerd[2081]: time="2025-01-17T12:15:29.588251984Z" level=info msg="shim disconnected" id=a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838 namespace=k8s.io
Jan 17 12:15:29.588364 containerd[2081]: time="2025-01-17T12:15:29.588350002Z" level=warning msg="cleaning up after shim disconnected" id=a3a08ce3a29ce3f7536ac698b6fc1ab70cd06b131799fcfe0de5ef5fba694838 namespace=k8s.io
Jan 17 12:15:29.588364 containerd[2081]: time="2025-01-17T12:15:29.588363779Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:30.299550 containerd[2081]: time="2025-01-17T12:15:30.299507549Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:15:30.340543 containerd[2081]: time="2025-01-17T12:15:30.340495985Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51\""
Jan 17 12:15:30.342188 containerd[2081]: time="2025-01-17T12:15:30.342147068Z" level=info msg="StartContainer for \"c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51\""
Jan 17 12:15:30.448879 containerd[2081]: time="2025-01-17T12:15:30.446877260Z" level=info msg="StartContainer for \"c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51\" returns successfully"
Jan 17 12:15:30.483263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51-rootfs.mount: Deactivated successfully.
Jan 17 12:15:30.486878 containerd[2081]: time="2025-01-17T12:15:30.486803756Z" level=info msg="shim disconnected" id=c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51 namespace=k8s.io
Jan 17 12:15:30.486878 containerd[2081]: time="2025-01-17T12:15:30.486867170Z" level=warning msg="cleaning up after shim disconnected" id=c0ccb0646631244753c19dd81f0790f20148e688668878e2f2eebeb90b155f51 namespace=k8s.io
Jan 17 12:15:30.486878 containerd[2081]: time="2025-01-17T12:15:30.486880328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:31.290041 containerd[2081]: time="2025-01-17T12:15:31.289948798Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:15:31.340147 containerd[2081]: time="2025-01-17T12:15:31.340098899Z" level=info msg="CreateContainer within sandbox \"149d6bb506674b3f92595c1afaf71fe5efc6ec3f09b386ac0205d498ce09e581\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a\""
Jan 17 12:15:31.340884 containerd[2081]: time="2025-01-17T12:15:31.340780503Z" level=info msg="StartContainer for \"810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a\""
Jan 17 12:15:31.424103 containerd[2081]: time="2025-01-17T12:15:31.423937798Z" level=info msg="StartContainer for \"810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a\" returns successfully"
Jan 17 12:15:32.312770 containerd[2081]: time="2025-01-17T12:15:32.312723040Z" level=info msg="StopPodSandbox for \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\""
Jan 17 12:15:32.313048 containerd[2081]: time="2025-01-17T12:15:32.312921234Z" level=info msg="TearDown network for sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" successfully"
Jan 17 12:15:32.313048 containerd[2081]: time="2025-01-17T12:15:32.312942121Z" level=info msg="StopPodSandbox for \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" returns successfully"
Jan 17 12:15:32.316005 containerd[2081]: time="2025-01-17T12:15:32.313647969Z" level=info msg="RemovePodSandbox for \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\""
Jan 17 12:15:32.322650 containerd[2081]: time="2025-01-17T12:15:32.322480629Z" level=info msg="Forcibly stopping sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\""
Jan 17 12:15:32.322818 containerd[2081]: time="2025-01-17T12:15:32.322718758Z" level=info msg="TearDown network for sandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" successfully"
Jan 17 12:15:32.324779 kubelet[3681]: I0117 12:15:32.324486    3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-92qh7" podStartSLOduration=6.324432944 podStartE2EDuration="6.324432944s" podCreationTimestamp="2025-01-17 12:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:32.314820867 +0000 UTC m=+120.319362095" watchObservedRunningTime="2025-01-17 12:15:32.324432944 +0000 UTC m=+120.328974177"
Jan 17 12:15:32.328744 containerd[2081]: time="2025-01-17T12:15:32.328701448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:15:32.328874 containerd[2081]: time="2025-01-17T12:15:32.328776210Z" level=info msg="RemovePodSandbox \"4ddced8a80b865b1bc34dcb8a411f14c59e61f8431765907d8ba4681ba54302e\" returns successfully"
Jan 17 12:15:32.329719 containerd[2081]: time="2025-01-17T12:15:32.329684958Z" level=info msg="StopPodSandbox for \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\""
Jan 17 12:15:32.329815 containerd[2081]: time="2025-01-17T12:15:32.329781726Z" level=info msg="TearDown network for sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" successfully"
Jan 17 12:15:32.329815 containerd[2081]: time="2025-01-17T12:15:32.329800076Z" level=info msg="StopPodSandbox for \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" returns successfully"
Jan 17 12:15:32.330185 containerd[2081]: time="2025-01-17T12:15:32.330157154Z" level=info msg="RemovePodSandbox for \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\""
Jan 17 12:15:32.330274 containerd[2081]: time="2025-01-17T12:15:32.330186162Z" level=info msg="Forcibly stopping sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\""
Jan 17 12:15:32.330274 containerd[2081]: time="2025-01-17T12:15:32.330245711Z" level=info msg="TearDown network for sandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" successfully"
Jan 17 12:15:32.342266 containerd[2081]: time="2025-01-17T12:15:32.342217463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:15:32.343794 containerd[2081]: time="2025-01-17T12:15:32.342287544Z" level=info msg="RemovePodSandbox \"0148f453833013382683aa84e94e4ffc73809b0db81a636125599870bc155bf6\" returns successfully"
Jan 17 12:15:33.274122 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:15:34.524503 systemd[1]: run-containerd-runc-k8s.io-810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a-runc.s2I41m.mount: Deactivated successfully.
Jan 17 12:15:36.752848 systemd-networkd[1644]: lxc_health: Link UP
Jan 17 12:15:36.767226 systemd-networkd[1644]: lxc_health: Gained carrier
Jan 17 12:15:36.808029 (udev-worker)[6361]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:15:38.767146 systemd-networkd[1644]: lxc_health: Gained IPv6LL
Jan 17 12:15:39.424070 systemd[1]: run-containerd-runc-k8s.io-810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a-runc.e9hKx6.mount: Deactivated successfully.
Jan 17 12:15:41.395745 ntpd[2038]: Listen normally on 13 lxc_health [fe80::24f8:f7ff:fe44:22be%14]:123
Jan 17 12:15:41.693835 systemd[1]: run-containerd-runc-k8s.io-810aa9559b3567802ecb89e20122effafead374658d7a23cb74d0fdfee9f281a-runc.mQhkx2.mount: Deactivated successfully.
Jan 17 12:15:41.804285 sshd[5528]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:41.812549 systemd-logind[2055]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:15:41.814024 systemd[1]: sshd@26-172.31.29.55:22-139.178.89.65:53408.service: Deactivated successfully.
Jan 17 12:15:41.826303 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:15:41.832782 systemd-logind[2055]: Removed session 27.
Jan 17 12:15:58.565333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03-rootfs.mount: Deactivated successfully.
Jan 17 12:15:58.600400 containerd[2081]: time="2025-01-17T12:15:58.600329252Z" level=info msg="shim disconnected" id=bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03 namespace=k8s.io
Jan 17 12:15:58.600400 containerd[2081]: time="2025-01-17T12:15:58.600394002Z" level=warning msg="cleaning up after shim disconnected" id=bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03 namespace=k8s.io
Jan 17 12:15:58.601071 containerd[2081]: time="2025-01-17T12:15:58.600407164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:59.385770 kubelet[3681]: I0117 12:15:59.382798    3681 scope.go:117] "RemoveContainer" containerID="bfaf13b5a453f9cd90e533b0ff4f68b146c40ae6071a3ba2af37b590a6e01f03"
Jan 17 12:15:59.398148 containerd[2081]: time="2025-01-17T12:15:59.398093334Z" level=info msg="CreateContainer within sandbox \"a794c1596d18e60bfae1f3f1facd076539b69fb9ffb9f1aeb0b85d8faaa4577e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 12:15:59.456555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364635428.mount: Deactivated successfully.
Jan 17 12:15:59.458337 containerd[2081]: time="2025-01-17T12:15:59.457363860Z" level=info msg="CreateContainer within sandbox \"a794c1596d18e60bfae1f3f1facd076539b69fb9ffb9f1aeb0b85d8faaa4577e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eb4d1121bb9746f80602e1a63a52b99d8466e6eecd698e061651a5c5e966cd82\""
Jan 17 12:15:59.460712 containerd[2081]: time="2025-01-17T12:15:59.460675900Z" level=info msg="StartContainer for \"eb4d1121bb9746f80602e1a63a52b99d8466e6eecd698e061651a5c5e966cd82\""
Jan 17 12:15:59.601766 containerd[2081]: time="2025-01-17T12:15:59.601718543Z" level=info msg="StartContainer for \"eb4d1121bb9746f80602e1a63a52b99d8466e6eecd698e061651a5c5e966cd82\" returns successfully"
Jan 17 12:16:03.629056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2-rootfs.mount: Deactivated successfully.
Jan 17 12:16:03.649125 containerd[2081]: time="2025-01-17T12:16:03.648761028Z" level=info msg="shim disconnected" id=1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2 namespace=k8s.io
Jan 17 12:16:03.655132 containerd[2081]: time="2025-01-17T12:16:03.655007452Z" level=warning msg="cleaning up after shim disconnected" id=1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2 namespace=k8s.io
Jan 17 12:16:03.655132 containerd[2081]: time="2025-01-17T12:16:03.655124496Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:16:04.408410 kubelet[3681]: I0117 12:16:04.408378    3681 scope.go:117] "RemoveContainer" containerID="1496bf90523fa5d32765ffc997cc240aaaa0bf5707aef6553330e7c2545821a2"
Jan 17 12:16:04.412947 containerd[2081]: time="2025-01-17T12:16:04.412858679Z" level=info msg="CreateContainer within sandbox \"087bcc1e74f9b52de71a30855846f3bbd9976e1ad6d6f6867faab48c5daf3b50\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 12:16:04.474005 containerd[2081]: time="2025-01-17T12:16:04.469328298Z" level=info msg="CreateContainer within sandbox \"087bcc1e74f9b52de71a30855846f3bbd9976e1ad6d6f6867faab48c5daf3b50\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"21c7e927cb6c0b2eb2370942d7d07ea621e0fb468b935016b2c9e5dbd33ab413\""
Jan 17 12:16:04.474894 containerd[2081]: time="2025-01-17T12:16:04.474841994Z" level=info msg="StartContainer for \"21c7e927cb6c0b2eb2370942d7d07ea621e0fb468b935016b2c9e5dbd33ab413\""
Jan 17 12:16:04.638448 containerd[2081]: time="2025-01-17T12:16:04.638402904Z" level=info msg="StartContainer for \"21c7e927cb6c0b2eb2370942d7d07ea621e0fb468b935016b2c9e5dbd33ab413\" returns successfully"
Jan 17 12:16:04.761257 kubelet[3681]: E0117 12:16:04.757540    3681 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"