Jun 25 16:27:20.954671 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:27:20.954705 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:20.954720 kernel: BIOS-provided physical RAM map: Jun 25 16:27:20.954731 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:27:20.954742 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:27:20.954753 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:27:20.954769 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jun 25 16:27:20.954781 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jun 25 16:27:20.954792 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jun 25 16:27:20.954803 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:27:20.954815 kernel: NX (Execute Disable) protection: active Jun 25 16:27:20.954826 kernel: SMBIOS 2.7 present. Jun 25 16:27:20.954837 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 25 16:27:20.954849 kernel: Hypervisor detected: KVM Jun 25 16:27:20.954866 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:27:20.954879 kernel: kvm-clock: using sched offset of 7621234513 cycles Jun 25 16:27:20.954892 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:27:20.954905 kernel: tsc: Detected 2499.996 MHz processor Jun 25 16:27:20.954918 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:27:20.954931 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:27:20.954944 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jun 25 16:27:20.954959 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:27:20.954972 kernel: Using GB pages for direct mapping Jun 25 16:27:20.955005 kernel: ACPI: Early table checksum verification disabled Jun 25 16:27:20.955016 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jun 25 16:27:20.955028 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jun 25 16:27:20.955040 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 16:27:20.955051 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 25 16:27:20.955063 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jun 25 16:27:20.955080 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:27:20.955092 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 16:27:20.955103 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 25 16:27:20.955116 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 16:27:20.955128 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON 
AMZNWAET 00000001 AMZN 00000001) Jun 25 16:27:20.955140 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 25 16:27:20.955152 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:27:20.955164 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jun 25 16:27:20.955175 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jun 25 16:27:20.955252 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jun 25 16:27:20.955265 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jun 25 16:27:20.955284 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jun 25 16:27:20.955297 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jun 25 16:27:20.955308 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jun 25 16:27:20.955322 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jun 25 16:27:20.955335 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jun 25 16:27:20.955348 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jun 25 16:27:20.955362 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:27:20.955374 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:27:20.955387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 25 16:27:20.955401 kernel: NUMA: Initialized distance table, cnt=1 Jun 25 16:27:20.955414 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jun 25 16:27:20.955427 kernel: Zone ranges: Jun 25 16:27:20.955440 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:27:20.955454 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jun 25 16:27:20.955466 kernel: Normal empty Jun 25 16:27:20.955479 kernel: Movable zone start for each node Jun 25 16:27:20.955492 kernel: Early memory node ranges Jun 25 16:27:20.955505 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:27:20.955518 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jun 25 16:27:20.955530 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jun 25 16:27:20.955541 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:27:20.955553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:27:20.955568 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jun 25 16:27:20.955579 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 16:27:20.955909 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:27:20.955924 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 25 16:27:20.955936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:27:20.956026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:27:20.956042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:27:20.956056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:27:20.956069 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:27:20.956087 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:27:20.956099 kernel: TSC deadline timer available Jun 25 16:27:20.956113 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:27:20.956125 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jun 25 16:27:20.956139 kernel: Booting paravirtualized kernel on KVM Jun 25 16:27:20.956151 kernel: 
clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:27:20.956164 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:27:20.956243 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:27:20.956258 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:27:20.956274 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:27:20.956397 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:27:20.956414 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:27:20.956426 kernel: Fallback order for Node 0: 0 Jun 25 16:27:20.956440 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jun 25 16:27:20.956454 kernel: Policy zone: DMA32 Jun 25 16:27:20.956470 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:20.956484 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:27:20.956502 kernel: random: crng init done Jun 25 16:27:20.956515 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:27:20.956529 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:27:20.956542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:27:20.956556 kernel: Memory: 1928268K/2057760K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129232K reserved, 0K cma-reserved) Jun 25 16:27:20.956570 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:27:20.956582 kernel: Kernel/User page tables isolation: enabled Jun 25 16:27:20.956596 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:27:20.956609 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:27:20.956626 kernel: Dynamic Preempt: voluntary Jun 25 16:27:20.956640 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:27:20.956652 kernel: rcu: RCU event tracing is enabled. Jun 25 16:27:20.956666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:27:20.956679 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:27:20.956691 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:27:20.956705 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:27:20.956719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:27:20.956732 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:27:20.956749 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:27:20.956857 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 25 16:27:20.956915 kernel: Console: colour VGA+ 80x25 Jun 25 16:27:20.956929 kernel: printk: console [ttyS0] enabled Jun 25 16:27:20.956943 kernel: ACPI: Core revision 20220331 Jun 25 16:27:20.956957 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 25 16:27:20.956972 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:27:20.956999 kernel: x2apic enabled Jun 25 16:27:20.957013 kernel: Switched APIC routing to physical x2apic. Jun 25 16:27:20.957027 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jun 25 16:27:20.957044 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jun 25 16:27:20.957056 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:27:20.957078 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:27:20.957095 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:27:20.957109 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:27:20.957123 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:27:20.957136 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:27:20.957148 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 16:27:20.957164 kernel: RETBleed: Vulnerable Jun 25 16:27:20.957176 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:27:20.957188 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:27:20.957201 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:27:20.957215 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:27:20.957232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:27:20.957246 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:27:20.957263 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:27:20.957276 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 25 16:27:20.957288 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 25 16:27:20.957305 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 16:27:20.957319 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 16:27:20.957332 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 16:27:20.957346 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 25 16:27:20.957362 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:27:20.957381 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 25 16:27:20.957394 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 25 16:27:20.957407 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 25 16:27:20.957419 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 25 16:27:20.957432 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 25 16:27:20.957447 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 25 16:27:20.957462 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
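The x86/fpu lines above enumerate each enabled XSAVE component's offset and size in the compacted layout, and the reported context size of 2568 bytes is simply the last component's offset plus its size. A small Python sanity check over the numbers printed above (the table below is transcribed from this log; nothing is queried from hardware):

```python
# XSAVE components reported above in compacted format. Components 0/1
# (x87/SSE) live in the legacy 512-byte area plus the 64-byte XSAVE
# header, so the first extended component starts at offset 576.
xstate = {
    2: (576, 256),    # AVX registers
    3: (832, 64),     # MPX bounds registers
    4: (896, 64),     # MPX CSR
    5: (960, 64),     # AVX-512 opmask
    6: (1024, 512),   # AVX-512 Hi256
    7: (1536, 1024),  # AVX-512 ZMM_Hi256
    9: (2560, 8),     # Protection Keys User registers
}

# Each component starts exactly where the previous one ends...
offsets = sorted(xstate.values())
for (off, size), (next_off, _) in zip(offsets, offsets[1:]):
    assert off + size == next_off

# ...and the total context size is the end of the last component.
last_off, last_size = offsets[-1]
print(last_off + last_size)  # -> 2568, matching "context size is 2568 bytes"
```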
Jun 25 16:27:20.957480 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:27:20.957495 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:27:20.957509 kernel: LSM: Security Framework initializing Jun 25 16:27:20.957524 kernel: SELinux: Initializing. Jun 25 16:27:20.957539 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:27:20.957554 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:27:20.957567 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 16:27:20.957581 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:20.957594 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:20.957618 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:20.957633 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:20.957650 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:20.957665 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:20.957677 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 16:27:20.957691 kernel: signal: max sigframe size: 3632 Jun 25 16:27:20.957705 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:27:20.957720 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:27:20.957735 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:27:20.957749 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:27:20.957764 kernel: x86: Booting SMP configuration: Jun 25 16:27:20.957779 kernel: .... node #0, CPUs: #1 Jun 25 16:27:20.957799 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 16:27:20.957816 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jun 25 16:27:20.957831 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:27:20.957847 kernel: smpboot: Max logical packages: 1 Jun 25 16:27:20.957860 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jun 25 16:27:20.957874 kernel: devtmpfs: initialized Jun 25 16:27:20.958005 kernel: x86/mm: Memory block size: 128MB Jun 25 16:27:20.958024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:27:20.958045 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:27:20.958060 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:27:20.958075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:27:20.958090 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:27:20.958106 kernel: audit: type=2000 audit(1719332840.404:1): state=initialized audit_enabled=0 res=1 Jun 25 16:27:20.958121 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:27:20.958136 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:27:20.958152 kernel: cpuidle: using governor menu Jun 25 16:27:20.958167 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:27:20.958186 kernel: dca service started, version 1.12.1 Jun 25 16:27:20.958201 kernel: PCI: Using configuration type 1 for base access Jun 25 16:27:20.958216 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:27:20.958265 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:27:20.958280 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:27:20.958295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:27:20.958311 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:27:20.958325 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:27:20.958341 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:27:20.958360 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:27:20.958375 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:27:20.958391 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 16:27:20.958406 kernel: ACPI: Interpreter enabled Jun 25 16:27:20.958421 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:27:20.958437 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:27:20.958452 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:27:20.958468 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:27:20.958484 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 16:27:20.958503 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:27:20.958704 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:27:20.958857 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:27:20.959163 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
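The "4999.99 BogoMIPS (lpj=2499996)" line earlier and the "Total of 2 processors activated (9999.98 BogoMIPS)" line above are related by simple arithmetic: the kernel reports lpj / (500000 / HZ) per CPU and sums it over the CPUs brought up. A short sketch reproducing those figures, assuming CONFIG_HZ=1000 (an inference consistent with the printed numbers; the log does not state HZ):

```python
lpj = 2_499_996   # "Calibrating delay loop (skipped) ... (lpj=2499996)"
hz = 1000         # assumed CONFIG_HZ, consistent with the values below
cpus = 2          # "smpboot: Allowing 2 CPUs" / "smp: Brought up 1 node, 2 CPUs"

# calibrate_delay() reports lpj / (500000 / HZ) as the per-CPU BogoMIPS.
bogomips = lpj / (500_000 / hz)
print(f"{bogomips:.2f}")         # 4999.99, the per-CPU figure in the log
print(f"{cpus * bogomips:.2f}")  # 9999.98, the "Total of 2 processors" figure
```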
Jun 25 16:27:20.959187 kernel: acpiphp: Slot [3] registered Jun 25 16:27:20.959204 kernel: acpiphp: Slot [4] registered Jun 25 16:27:20.959218 kernel: acpiphp: Slot [5] registered Jun 25 16:27:20.959255 kernel: acpiphp: Slot [6] registered Jun 25 16:27:20.959270 kernel: acpiphp: Slot [7] registered Jun 25 16:27:20.959286 kernel: acpiphp: Slot [8] registered Jun 25 16:27:20.959300 kernel: acpiphp: Slot [9] registered Jun 25 16:27:20.959315 kernel: acpiphp: Slot [10] registered Jun 25 16:27:20.959331 kernel: acpiphp: Slot [11] registered Jun 25 16:27:20.959346 kernel: acpiphp: Slot [12] registered Jun 25 16:27:20.959361 kernel: acpiphp: Slot [13] registered Jun 25 16:27:20.959376 kernel: acpiphp: Slot [14] registered Jun 25 16:27:20.959392 kernel: acpiphp: Slot [15] registered Jun 25 16:27:20.959410 kernel: acpiphp: Slot [16] registered Jun 25 16:27:20.959425 kernel: acpiphp: Slot [17] registered Jun 25 16:27:20.959440 kernel: acpiphp: Slot [18] registered Jun 25 16:27:20.959455 kernel: acpiphp: Slot [19] registered Jun 25 16:27:20.959471 kernel: acpiphp: Slot [20] registered Jun 25 16:27:20.959486 kernel: acpiphp: Slot [21] registered Jun 25 16:27:20.959501 kernel: acpiphp: Slot [22] registered Jun 25 16:27:20.959517 kernel: acpiphp: Slot [23] registered Jun 25 16:27:20.959532 kernel: acpiphp: Slot [24] registered Jun 25 16:27:20.959620 kernel: acpiphp: Slot [25] registered Jun 25 16:27:20.959637 kernel: acpiphp: Slot [26] registered Jun 25 16:27:20.959652 kernel: acpiphp: Slot [27] registered Jun 25 16:27:20.959703 kernel: acpiphp: Slot [28] registered Jun 25 16:27:20.959719 kernel: acpiphp: Slot [29] registered Jun 25 16:27:20.959734 kernel: acpiphp: Slot [30] registered Jun 25 16:27:20.959749 kernel: acpiphp: Slot [31] registered Jun 25 16:27:20.959764 kernel: PCI host bridge to bus 0000:00 Jun 25 16:27:20.960036 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:27:20.960327 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:27:20.960457 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:27:20.960666 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:27:20.960797 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:27:20.961014 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:27:20.961163 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:27:20.961352 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 25 16:27:20.961536 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 16:27:20.961832 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 16:27:20.961961 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 25 16:27:20.962100 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 25 16:27:20.962281 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 25 16:27:20.962411 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 25 16:27:20.962532 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 25 16:27:20.962661 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 25 16:27:20.962865 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 10742 usecs Jun 25 16:27:20.963029 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 25 16:27:20.963158 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jun 25 16:27:20.963394 kernel: pci 0000:00:03.0: 
reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 16:27:20.963586 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:27:20.963747 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 16:27:20.963939 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jun 25 16:27:20.964153 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 16:27:20.964293 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jun 25 16:27:20.964316 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:27:20.964336 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:27:20.964353 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:27:20.964370 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:27:20.964392 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:27:20.964408 kernel: iommu: Default domain type: Translated Jun 25 16:27:20.964425 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:27:20.964442 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:27:20.964460 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:27:20.964478 kernel: PTP clock support registered Jun 25 16:27:20.964551 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:27:20.964568 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:27:20.964582 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:27:20.964600 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jun 25 16:27:20.964785 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 25 16:27:20.964967 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 25 16:27:20.965108 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:27:20.965125 kernel: vgaarb: loaded Jun 25 16:27:20.965138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 25 16:27:20.965151 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 25 16:27:20.965162 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:27:20.965181 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:27:20.965195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:27:20.965208 kernel: pnp: PnP ACPI init Jun 25 16:27:20.965221 kernel: pnp: PnP ACPI: found 5 devices Jun 25 16:27:20.965234 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:27:20.965247 kernel: NET: Registered PF_INET protocol family Jun 25 16:27:20.965260 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:27:20.965273 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:27:20.965286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:27:20.965302 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:27:20.965315 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:27:20.965327 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:27:20.965339 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:27:20.965352 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:27:20.965366 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:27:20.965377 kernel: NET: 
Registered PF_XDP protocol family Jun 25 16:27:20.965537 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:27:20.965753 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:27:20.965860 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:27:20.965963 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:27:20.966099 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:27:20.966117 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:27:20.966130 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:27:20.966143 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jun 25 16:27:20.966156 kernel: clocksource: Switched to clocksource tsc Jun 25 16:27:20.966173 kernel: Initialise system trusted keyrings Jun 25 16:27:20.966186 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:27:20.966199 kernel: Key type asymmetric registered Jun 25 16:27:20.966211 kernel: Asymmetric key parser 'x509' registered Jun 25 16:27:20.966223 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:27:20.966236 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:27:20.966249 kernel: io scheduler mq-deadline registered Jun 25 16:27:20.966262 kernel: io scheduler kyber registered Jun 25 16:27:20.966275 kernel: io scheduler bfq registered Jun 25 16:27:20.966291 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:27:20.966304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:27:20.966317 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:27:20.966330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:27:20.966343 kernel: i8042: Warning: Keylock active Jun 25 16:27:20.966356 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:27:20.966369 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:27:20.966560 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 16:27:20.966674 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 16:27:20.966784 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T16:27:20 UTC (1719332840) Jun 25 16:27:20.966889 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 16:27:20.966905 kernel: intel_pstate: CPU model not supported Jun 25 16:27:20.966917 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:27:20.966930 kernel: Segment Routing with IPv6 Jun 25 16:27:20.966943 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:27:20.966956 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:27:20.966969 kernel: Key type dns_resolver registered Jun 25 16:27:20.966998 kernel: IPI shorthand broadcast: enabled Jun 25 16:27:20.967011 kernel: sched_clock: Marking stable (533508356, 213176236)->(830315774, -83631182) Jun 25 16:27:20.967024 kernel: registered taskstats version 1 Jun 25 16:27:20.967039 kernel: Loading compiled-in X.509 certificates Jun 25 16:27:20.967053 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:27:20.967067 kernel: Key type .fscrypt registered Jun 25 16:27:20.967081 kernel: Key type fscrypt-provisioning registered Jun 25 16:27:20.967096 kernel: ima: No TPM chip found, activating TPM-bypass! 
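The rtc_cmos line above prints the same instant twice, as a UTC timestamp and as a raw epoch value, and that epoch is also what the audit records earlier in this log use (e.g. audit(1719332840.404:1)). A one-line check that the two representations agree:

```python
from datetime import datetime, timezone

# "rtc_cmos 00:00: setting system clock to 2024-06-25T16:27:20 UTC (1719332840)"
print(datetime.fromtimestamp(1719332840, tz=timezone.utc).isoformat())
# -> 2024-06-25T16:27:20+00:00, matching the RTC line and the
#    audit(1719332840.404:1) timestamp earlier in the log.
```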
Jun 25 16:27:20.967111 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:27:20.967129 kernel: ima: No architecture policies found Jun 25 16:27:20.967144 kernel: clk: Disabling unused clocks Jun 25 16:27:20.967159 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:27:20.967174 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:27:20.967188 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:27:20.967203 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:27:20.967218 kernel: Run /init as init process Jun 25 16:27:20.967233 kernel: with arguments: Jun 25 16:27:20.967248 kernel: /init Jun 25 16:27:20.967266 kernel: with environment: Jun 25 16:27:20.967300 kernel: HOME=/ Jun 25 16:27:20.967318 kernel: TERM=linux Jun 25 16:27:20.967333 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:27:20.967351 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:27:20.967369 systemd[1]: Detected virtualization amazon. Jun 25 16:27:20.967386 systemd[1]: Detected architecture x86-64. Jun 25 16:27:20.967404 systemd[1]: Running in initrd. Jun 25 16:27:20.967420 systemd[1]: No hostname configured, using default hostname. Jun 25 16:27:20.967436 systemd[1]: Hostname set to . Jun 25 16:27:20.967453 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:27:20.967469 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:27:20.967485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:20.967502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:20.967518 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:27:20.967536 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:27:20.967552 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:27:20.967568 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:27:20.967585 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:27:20.967602 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:27:20.967618 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:27:20.967634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:27:20.967653 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:27:20.967670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:20.967734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:20.967758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:20.967776 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:27:20.967792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:27:20.967809 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:27:20.967826 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:27:20.967843 systemd[1]: Starting systemd-journald.service - Journal Service... 
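The hand-off above shows the kernel passing /init its arguments and environment, with the one parameter it did not recognize (BOOT_IMAGE=/flatcar/vmlinuz-a, flagged earlier as "will be passed to user space") forwarded verbatim; the rest of the command line stays visible to userspace via /proc/cmdline. A rough illustration of how userspace tooling might split that command line into key/value pairs (a simplified sketch, not the actual systemd or Flatcar parser; repeated keys such as rootflags keep only the last value and quoting is ignored):

```python
from pathlib import Path

def parse_cmdline(text):
    """Split a kernel command line into {key: value-or-None} pairs."""
    params = {}
    for token in text.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

cmdline = parse_cmdline(Path("/proc/cmdline").read_text())
# On this system, e.g.:
#   cmdline.get("flatcar.first_boot") == "detected"
#   cmdline.get("root") == "LABEL=ROOT"
print(cmdline.get("BOOT_IMAGE"))  # "/flatcar/vmlinuz-a"
```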
Jun 25 16:27:20.967862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:27:20.967878 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:27:20.967895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:20.967912 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:27:20.967930 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:27:20.968014 systemd-journald[180]: Journal started Jun 25 16:27:20.968090 systemd-journald[180]: Runtime Journal (/run/log/journal/ec243846d2c45270951b49b75b894dd9) is 4.8M, max 38.6M, 33.8M free. Jun 25 16:27:21.010024 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:27:20.986853 systemd-modules-load[181]: Inserted module 'overlay' Jun 25 16:27:21.135649 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:27:21.135676 kernel: Bridge firewalling registered Jun 25 16:27:21.135688 kernel: SCSI subsystem initialized Jun 25 16:27:21.135699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:27:21.135711 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:27:21.135722 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:27:21.036030 systemd-modules-load[181]: Inserted module 'br_netfilter' Jun 25 16:27:21.101724 systemd-modules-load[181]: Inserted module 'dm_multipath' Jun 25 16:27:21.141806 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:27:21.141838 kernel: audit: type=1130 audit(1719332841.137:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.141400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:21.146022 kernel: audit: type=1130 audit(1719332841.140:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.149435 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:27:21.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.152450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:27:21.157078 kernel: audit: type=1130 audit(1719332841.150:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:21.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.160007 kernel: audit: type=1130 audit(1719332841.155:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.161170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:27:21.164296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:27:21.165968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:27:21.179630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:21.195086 kernel: audit: type=1130 audit(1719332841.178:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.195167 kernel: audit: type=1130 audit(1719332841.180:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.195187 kernel: audit: type=1334 audit(1719332841.184:8): prog-id=6 op=LOAD Jun 25 16:27:21.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.184000 audit: BPF prog-id=6 op=LOAD Jun 25 16:27:21.180915 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:27:21.199236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:27:21.201871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:27:21.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.205771 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:27:21.218615 kernel: audit: type=1130 audit(1719332841.202:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:21.256081 dracut-cmdline[207]: dracut-dracut-053 Jun 25 16:27:21.262821 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:21.309919 systemd-resolved[203]: Positive Trust Anchors: Jun 25 16:27:21.309944 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:27:21.310005 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:27:21.323795 systemd-resolved[203]: Defaulting to hostname 'linux'. Jun 25 16:27:21.327203 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:27:21.331007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:21.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.337342 kernel: audit: type=1130 audit(1719332841.328:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.371008 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:27:21.385116 kernel: iscsi: registered transport (tcp) Jun 25 16:27:21.414157 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:27:21.414232 kernel: QLogic iSCSI HBA Driver Jun 25 16:27:21.455303 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:27:21.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.464294 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:27:21.541038 kernel: raid6: avx512x4 gen() 16917 MB/s Jun 25 16:27:21.558027 kernel: raid6: avx512x2 gen() 15801 MB/s Jun 25 16:27:21.575020 kernel: raid6: avx512x1 gen() 17129 MB/s Jun 25 16:27:21.592033 kernel: raid6: avx2x4 gen() 16438 MB/s Jun 25 16:27:21.609033 kernel: raid6: avx2x2 gen() 15555 MB/s Jun 25 16:27:21.626031 kernel: raid6: avx2x1 gen() 12331 MB/s Jun 25 16:27:21.626103 kernel: raid6: using algorithm avx512x1 gen() 17129 MB/s Jun 25 16:27:21.643017 kernel: raid6: .... 
xor() 19639 MB/s, rmw enabled Jun 25 16:27:21.643084 kernel: raid6: using avx512x2 recovery algorithm Jun 25 16:27:21.646014 kernel: xor: automatically using best checksumming function avx Jun 25 16:27:21.887029 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:27:21.898899 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:27:21.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.897000 audit: BPF prog-id=7 op=LOAD Jun 25 16:27:21.897000 audit: BPF prog-id=8 op=LOAD Jun 25 16:27:21.903238 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:21.931628 systemd-udevd[383]: Using default interface naming scheme 'v252'. Jun 25 16:27:21.939088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:27:21.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:21.946339 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:27:21.974846 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation Jun 25 16:27:22.048891 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:27:22.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:22.060285 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:27:22.159776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:22.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:22.230006 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:27:22.259009 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:27:22.259070 kernel: AES CTR mode by8 optimization enabled Jun 25 16:27:22.268011 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 16:27:22.271588 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 16:27:22.271740 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 25 16:27:22.271864 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:5e:ac:39:2b:17 Jun 25 16:27:22.273899 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:27:22.406006 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 16:27:22.406186 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:27:22.406207 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 16:27:22.406332 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:27:22.406345 kernel: GPT:9289727 != 16777215 Jun 25 16:27:22.406360 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:27:22.406370 kernel: GPT:9289727 != 16777215 Jun 25 16:27:22.406380 kernel: GPT: Use GNU Parted to correct GPT errors. 
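The GPT warnings above are the usual sign of an image built for a smaller disk booting on a larger EBS volume: the primary header says the backup header sits at LBA 9289727, while the device's actual last LBA is 16777215 (an 8 GiB volume in 512-byte sectors). A minimal Python sketch that reads the primary GPT header and makes the same comparison; /dev/nvme0n1 and the 512-byte sector size are assumptions for illustration:

```python
import os
import struct

DEV = "/dev/nvme0n1"   # or a raw disk image file
SECTOR = 512           # assumed logical sector size

with open(DEV, "rb") as f:
    disk_bytes = f.seek(0, os.SEEK_END)   # seek() returns the new offset
    last_lba = disk_bytes // SECTOR - 1

    f.seek(1 * SECTOR)                    # primary GPT header lives at LBA 1
    hdr = f.read(92)

sig = hdr[0:8]                            # should be b"EFI PART"
backup_lba = struct.unpack_from("<Q", hdr, 32)[0]   # AlternateLBA field

print(sig, backup_lba, last_lba)
if backup_lba != last_lba:
    # The same situation the kernel reports as "GPT:9289727 != 16777215".
    print("backup GPT header is not at the end of the disk")
```

Further down in this log the disk-uuid step rewrites the table ("Secondary Entries is updated. Secondary Header is updated."), which is how the condition warned about here gets resolved.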
Jun 25 16:27:22.406393 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:27:22.430020 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (437) Jun 25 16:27:22.442764 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 16:27:22.474021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:27:22.478037 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (438) Jun 25 16:27:22.514227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 16:27:22.543677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 16:27:22.543800 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 16:27:22.562209 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:27:22.572519 disk-uuid[598]: Primary Header is updated. Jun 25 16:27:22.572519 disk-uuid[598]: Secondary Entries is updated. Jun 25 16:27:22.572519 disk-uuid[598]: Secondary Header is updated. Jun 25 16:27:22.581555 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:27:22.600012 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:27:22.608012 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:27:23.606344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:27:23.606411 disk-uuid[599]: The operation has completed successfully. Jun 25 16:27:23.805766 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:27:23.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:23.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:23.805908 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:27:23.811316 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:27:23.817082 sh[943]: Success Jun 25 16:27:23.841050 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:27:23.957208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:27:23.966412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:27:23.971415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:27:23.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:23.999010 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:27:23.999071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:24.000168 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:27:24.000201 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:27:24.001998 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:27:24.068013 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:27:24.079756 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:27:24.081882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:27:24.089397 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:27:24.091458 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:27:24.136994 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:24.137094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:24.137166 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:27:24.165199 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:27:24.186665 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:27:24.189268 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:24.198612 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:27:24.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.208229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:27:24.240737 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:27:24.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.243000 audit: BPF prog-id=9 op=LOAD Jun 25 16:27:24.256234 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:27:24.312351 systemd-networkd[1133]: lo: Link UP Jun 25 16:27:24.312365 systemd-networkd[1133]: lo: Gained carrier Jun 25 16:27:24.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.313167 systemd-networkd[1133]: Enumeration completed Jun 25 16:27:24.313329 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:27:24.313713 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:27:24.313719 systemd-networkd[1133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:27:24.316029 systemd[1]: Reached target network.target - Network. Jun 25 16:27:24.325054 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... 
Jun 25 16:27:24.329705 systemd-networkd[1133]: eth0: Link UP Jun 25 16:27:24.330450 systemd-networkd[1133]: eth0: Gained carrier Jun 25 16:27:24.330976 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:27:24.331822 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:27:24.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.336868 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:27:24.342677 iscsid[1139]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:27:24.342677 iscsid[1139]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:27:24.342677 iscsid[1139]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:27:24.342677 iscsid[1139]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:27:24.358270 iscsid[1139]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:27:24.358270 iscsid[1139]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:27:24.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.344182 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:27:24.355195 systemd-networkd[1133]: eth0: DHCPv4 address 172.31.18.172/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:27:24.379003 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:27:24.415393 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:27:24.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.417091 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:27:24.420173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:24.421270 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:27:24.437295 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:27:24.453321 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:27:24.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:24.508420 ignition[1093]: Ignition 2.15.0 Jun 25 16:27:24.508732 ignition[1093]: Stage: fetch-offline Jun 25 16:27:24.509677 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:24.509691 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:24.510240 ignition[1093]: Ignition finished successfully Jun 25 16:27:24.514637 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:27:24.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.519254 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:27:24.546131 ignition[1159]: Ignition 2.15.0 Jun 25 16:27:24.546146 ignition[1159]: Stage: fetch Jun 25 16:27:24.546482 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:24.546496 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:24.546619 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:24.555228 ignition[1159]: PUT result: OK Jun 25 16:27:24.558132 ignition[1159]: parsed url from cmdline: "" Jun 25 16:27:24.558144 ignition[1159]: no config URL provided Jun 25 16:27:24.558154 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:27:24.558169 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:27:24.558198 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:24.560474 ignition[1159]: PUT result: OK Jun 25 16:27:24.560547 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 16:27:24.561734 ignition[1159]: GET result: OK Jun 25 16:27:24.561813 ignition[1159]: parsing config with SHA512: cec40df1a69b3833f1cf1bb92748698705a6def561ed82a7af57d3c8078af5c0d87fce1f6947762343f0f48aab01dc674faf2280cb3660e85016fe40f0bd6152 Jun 25 16:27:24.592963 unknown[1159]: fetched base config from "system" Jun 25 16:27:24.593010 unknown[1159]: fetched base config from "system" Jun 25 16:27:24.593024 unknown[1159]: fetched user config from "aws" Jun 25 16:27:24.606699 ignition[1159]: fetch: fetch complete Jun 25 16:27:24.607584 ignition[1159]: fetch: fetch passed Jun 25 16:27:24.608494 ignition[1159]: Ignition finished successfully Jun 25 16:27:24.613233 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:27:24.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.621408 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:27:24.639950 ignition[1165]: Ignition 2.15.0 Jun 25 16:27:24.640264 ignition[1165]: Stage: kargs Jun 25 16:27:24.640630 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:24.640864 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:24.640999 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:24.647314 ignition[1165]: PUT result: OK Jun 25 16:27:24.651334 ignition[1165]: kargs: kargs passed Jun 25 16:27:24.651779 ignition[1165]: Ignition finished successfully Jun 25 16:27:24.654129 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
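The Ignition fetch stage above follows the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of http://169.254.169.254/2019-10-01/user-data using it. A rough Python equivalent of that request flow (not Ignition's implementation; the header names come from the AWS IMDSv2 documentation rather than from anything printed in this log):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain a session token (the "PUT .../latest/api/token" above).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# Step 2: fetch user data with that token
# (the "GET .../2019-10-01/user-data" above).
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()
print(len(user_data), "bytes of user data")
```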
Jun 25 16:27:24.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.659287 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:27:24.678109 ignition[1171]: Ignition 2.15.0 Jun 25 16:27:24.678479 ignition[1171]: Stage: disks Jun 25 16:27:24.678913 ignition[1171]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:24.678935 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:24.679074 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:24.683312 ignition[1171]: PUT result: OK Jun 25 16:27:24.686352 ignition[1171]: disks: disks passed Jun 25 16:27:24.686424 ignition[1171]: Ignition finished successfully Jun 25 16:27:24.688887 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:27:24.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.689194 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:27:24.693313 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:27:24.695497 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:27:24.696888 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:27:24.699652 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:27:24.706201 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:27:24.766879 systemd-fsck[1180]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:27:24.775345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:27:24.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:24.780192 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:27:24.920047 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:27:24.920248 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:27:24.921482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:27:24.934139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:27:24.939439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:27:24.941484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:27:24.941624 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:27:24.941661 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:27:24.946990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:27:24.951396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
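On the iscsid warning earlier in the initrd log (no /etc/iscsi/initiatorname.iscsi): the fix it asks for is a single InitiatorName= line in IQN form, and on an EC2 instance with no iSCSI targets the warning is typically harmless and can be ignored. Below is a hedged sketch that writes such a file; the "iqn.2024-06.io.example" prefix and the random identifier are placeholders, only the overall shape comes from the warning itself.

    # Sketch: create the one-line initiator-name file iscsid asks for.
    # The IQN prefix and the random suffix are illustrative placeholders,
    # not values taken from this host.
    import pathlib
    import uuid

    iqn = "iqn.2024-06.io.example:initiator-" + uuid.uuid4().hex[:12]
    conf = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text("InitiatorName=" + iqn + "\n")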
Jun 25 16:27:24.969016 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1197) Jun 25 16:27:24.972439 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:24.972500 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:24.972519 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:27:24.981142 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:27:24.983647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:27:25.202885 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:27:25.210137 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:27:25.221825 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:27:25.238880 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:27:25.503634 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:27:25.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.509441 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:27:25.509500 kernel: audit: type=1130 audit(1719332845.504:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.515719 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:27:25.522917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:27:25.531223 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:27:25.532331 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:25.560025 ignition[1308]: INFO : Ignition 2.15.0 Jun 25 16:27:25.561907 ignition[1308]: INFO : Stage: mount Jun 25 16:27:25.561907 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:25.561907 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:25.561907 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:25.569482 ignition[1308]: INFO : PUT result: OK Jun 25 16:27:25.575869 ignition[1308]: INFO : mount: mount passed Jun 25 16:27:25.577034 ignition[1308]: INFO : Ignition finished successfully Jun 25 16:27:25.578853 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:27:25.584238 kernel: audit: type=1130 audit(1719332845.578:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.585457 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:27:25.605435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:27:25.613825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 16:27:25.618066 kernel: audit: type=1130 audit(1719332845.613:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:25.633011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1319) Jun 25 16:27:25.634746 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:25.634801 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:25.634819 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:27:25.641178 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:27:25.643440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:27:25.686162 ignition[1337]: INFO : Ignition 2.15.0 Jun 25 16:27:25.686162 ignition[1337]: INFO : Stage: files Jun 25 16:27:25.689703 ignition[1337]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:25.689703 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:25.689703 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:25.694046 ignition[1337]: INFO : PUT result: OK Jun 25 16:27:25.698684 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:27:25.700790 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:27:25.700790 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:27:25.706134 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:27:25.708692 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:27:25.710654 unknown[1337]: wrote ssh authorized keys file for user: core Jun 25 16:27:25.712148 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:27:25.714213 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:27:25.716343 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:27:25.765653 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:27:25.909826 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:27:25.912087 ignition[1337]: 
INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:27:25.912087 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 16:27:25.999145 systemd-networkd[1133]: eth0: Gained IPv6LL Jun 25 16:27:26.248325 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:27:26.718192 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:27:26.718192 ignition[1337]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:27:26.723062 ignition[1337]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:27:26.725256 ignition[1337]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:27:26.725256 ignition[1337]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:27:26.728415 ignition[1337]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:27:26.728415 ignition[1337]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:27:26.731195 ignition[1337]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:27:26.732822 ignition[1337]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:27:26.734454 ignition[1337]: INFO : files: files passed Jun 25 16:27:26.734454 ignition[1337]: INFO : Ignition finished successfully Jun 25 16:27:26.737916 systemd[1]: Finished ignition-files.service - Ignition (files). 
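The files stage above is the visible half of the Ignition config this instance was served: files written under /sysroot (the Helm tarball, the sample manifests, update.conf), a link from /etc/extensions/kubernetes.raw to the Kubernetes sysext image, and prepare-helm.service enabled. A rough sketch of the kind of config that yields operations like these follows, expressed as a Python dict dumped to Ignition-style JSON; the spec version, the omission of verification hashes, and the unit list are assumptions for illustration, not a reconstruction of the config that was actually fetched.

    # Sketch of an Ignition-style (spec v3) config mirroring the operations the
    # files stage reports. Field names follow the public Ignition schema
    # (storage.files / storage.links / systemd.units); values are illustrative.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                    "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}],
        },
    }

    print(json.dumps(config, indent=2))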
Jun 25 16:27:26.743231 kernel: audit: type=1130 audit(1719332846.738:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.747285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:27:26.753354 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:27:26.755538 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:27:26.755630 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:27:26.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.764004 kernel: audit: type=1130 audit(1719332846.760:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.764039 kernel: audit: type=1131 audit(1719332846.760:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.773761 initrd-setup-root-after-ignition[1363]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:26.776374 initrd-setup-root-after-ignition[1367]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:26.779097 initrd-setup-root-after-ignition[1363]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:26.779109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:27:26.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.782924 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:27:26.788719 kernel: audit: type=1130 audit(1719332846.781:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.797254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:27:26.823958 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:27:26.824125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:27:26.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.826480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jun 25 16:27:26.832698 kernel: audit: type=1130 audit(1719332846.824:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.832763 kernel: audit: type=1131 audit(1719332846.824:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.832791 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:27:26.835068 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:27:26.841277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:27:26.859521 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:27:26.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.864007 kernel: audit: type=1130 audit(1719332846.858:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.865285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:27:26.877774 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:26.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.878028 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:26.878172 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:27:26.878384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:27:26.878518 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:27:26.878903 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jun 25 16:27:26.922652 ignition[1381]: INFO : Ignition 2.15.0 Jun 25 16:27:26.922652 ignition[1381]: INFO : Stage: umount Jun 25 16:27:26.922652 ignition[1381]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:26.922652 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:27:26.922652 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:27:26.879103 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:27:26.929590 ignition[1381]: INFO : PUT result: OK Jun 25 16:27:26.879237 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:27:26.879395 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:27:26.879544 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:27:26.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.879772 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:27:26.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.942188 ignition[1381]: INFO : umount: umount passed Jun 25 16:27:26.942188 ignition[1381]: INFO : Ignition finished successfully Jun 25 16:27:26.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.879977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:27:26.880196 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:27:26.880339 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:27:26.880558 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:27:26.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.880708 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:27:26.880831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:27:26.880948 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:27:26.881875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:26.882051 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:27:26.882304 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:27:26.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.882804 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:27:26.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:26.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.883001 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:27:26.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.883268 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:27:26.883423 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:27:26.906300 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:27:26.929622 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:27:26.935264 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:27:26.936548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:27:26.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.936894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:26.938369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:27:26.938465 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:27:26.941469 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:27:26.941558 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:27:26.947597 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:27:26.947687 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:27:26.959109 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:27:26.961163 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:27:26.963657 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:27:26.963723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:27:26.965489 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:27:26.965536 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:27:26.966670 systemd[1]: Stopped target network.target - Network. Jun 25 16:27:26.968891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:27:27.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.968940 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:27:26.970314 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:27:26.974444 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:27:26.979130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:27.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:26.980301 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 25 16:27:26.982155 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:27:26.983191 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:27:26.983235 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:27:26.985674 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:27:26.985713 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:27:26.986936 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:27:26.987014 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:27:26.989161 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:27:26.997767 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:27:27.006427 systemd-networkd[1133]: eth0: DHCPv6 lease lost Jun 25 16:27:27.021921 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:27:27.022800 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:27:27.022932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:27:27.028731 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:27:27.028857 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:27:27.032991 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:27:27.033085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:27:27.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.050771 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:27:27.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.050862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:27:27.051000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:27:27.051000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:27:27.053806 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:27:27.053840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:27.057354 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:27:27.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.057417 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:27:27.071163 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:27:27.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:27.072233 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:27:27.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.072313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:27:27.073655 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:27:27.073704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:27.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.075944 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:27:27.075999 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:27.077341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:27:27.077397 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:27:27.084179 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:27.098725 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:27:27.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.098813 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:27:27.099435 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:27:27.099564 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:27:27.110875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:27:27.110933 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:27.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.112423 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:27:27.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.112458 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:27.113634 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:27:27.113680 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:27:27.116426 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:27:27.116471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:27:27.119117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jun 25 16:27:27.120098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:27:27.135115 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:27:27.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.136054 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:27:27.136194 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:27:27.137354 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:27:27.137392 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:27.138681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:27:27.138718 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:27:27.147734 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:27:27.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:27.148415 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:27:27.148514 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:27:27.151361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:27:27.151580 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:27:27.154000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:27:27.167305 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:27:27.178700 systemd[1]: Switching root. Jun 25 16:27:27.211030 systemd-journald[180]: Received SIGTERM from PID 1 (n/a). Jun 25 16:27:27.211106 iscsid[1139]: iscsid shutting down. Jun 25 16:27:27.212786 systemd-journald[180]: Journal stopped Jun 25 16:27:28.792747 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 16:27:28.792823 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:27:28.792845 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:27:28.792871 kernel: SELinux: policy capability open_perms=1 Jun 25 16:27:28.792887 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:27:28.792911 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:27:28.792933 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:27:28.792950 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:27:28.793083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:27:28.793111 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:27:28.793131 systemd[1]: Successfully loaded SELinux policy in 54.608ms. Jun 25 16:27:28.793158 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.083ms. Jun 25 16:27:28.793178 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:27:28.793198 systemd[1]: Detected virtualization amazon. Jun 25 16:27:28.793216 systemd[1]: Detected architecture x86-64. Jun 25 16:27:28.793238 systemd[1]: Detected first boot. Jun 25 16:27:28.793257 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:27:28.793278 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:27:28.793297 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:27:28.793315 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:27:28.793333 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:27:28.793357 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:27:28.793375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:27:28.793394 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:27:28.793416 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:27:28.793437 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:27:28.793456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:27:28.793475 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:27:28.793494 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:27:28.793513 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:27:28.793531 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:27:28.793549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:28.793576 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:27:28.793597 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:27:28.793617 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:27:28.793635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jun 25 16:27:28.793654 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:27:28.793673 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:27:28.793695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:27:28.793713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:28.793732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:27:28.793752 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:27:28.793770 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:27:28.793789 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:27:28.793807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:27:28.793825 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:27:28.793844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:28.793862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:28.793947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:28.793966 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:27:28.794034 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:27:28.794229 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:27:28.794254 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:27:28.794304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:28.794322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:27:28.794340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:27:28.794474 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:27:28.796087 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:27:28.796133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:28.796156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:27:28.796178 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:27:28.796200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:28.796222 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:27:28.796244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:28.796266 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:27:28.796287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:28.796308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:27:28.796333 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:27:28.796355 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:27:28.796377 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:27:28.796519 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 25 16:27:28.796543 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:27:28.796565 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:27:28.796586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:27:28.796608 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:27:28.796630 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:27:28.796655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:27:28.796677 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:27:28.796698 systemd[1]: Stopped verity-setup.service. Jun 25 16:27:28.796720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:28.797187 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:27:28.797286 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:27:28.797327 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:27:28.797350 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:27:28.797375 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:27:28.797396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:27:28.797419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:28.797441 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:27:28.797462 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:27:28.797480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:28.797500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:28.797521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:28.797544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:28.797577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:27:28.797599 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:27:28.797620 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:27:28.797641 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:27:28.797663 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:27:28.797687 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:27:28.797709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:28.797730 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:27:28.797752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:28.797773 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:27:28.797793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 25 16:27:28.797815 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:27:28.797836 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:27:28.797860 kernel: fuse: init (API version 7.37) Jun 25 16:27:28.797882 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:27:28.797902 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:27:28.797923 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:27:28.797945 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:27:28.797971 systemd-journald[1484]: Journal started Jun 25 16:27:28.798059 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec243846d2c45270951b49b75b894dd9) is 4.8M, max 38.6M, 33.8M free. Jun 25 16:27:27.454000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:27:27.608000 audit: BPF prog-id=10 op=LOAD Jun 25 16:27:27.608000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:27:27.608000 audit: BPF prog-id=11 op=LOAD Jun 25 16:27:27.608000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:27:28.399000 audit: BPF prog-id=12 op=LOAD Jun 25 16:27:28.399000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:27:28.399000 audit: BPF prog-id=13 op=LOAD Jun 25 16:27:28.399000 audit: BPF prog-id=14 op=LOAD Jun 25 16:27:28.399000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:27:28.399000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:27:28.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.404000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:27:28.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:28.603000 audit: BPF prog-id=15 op=LOAD Jun 25 16:27:28.603000 audit: BPF prog-id=16 op=LOAD Jun 25 16:27:28.603000 audit: BPF prog-id=17 op=LOAD Jun 25 16:27:28.603000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:27:28.603000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:27:28.802720 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:27:28.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:28.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.783000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:27:28.783000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff12038b50 a2=4000 a3=7fff12038bec items=0 ppid=1 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:28.783000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:27:28.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.390777 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:27:28.390790 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 16:27:28.402634 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:27:28.810003 kernel: loop: module loaded Jun 25 16:27:28.811319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:27:28.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.814973 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:28.815243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:28.817846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:28.841307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:28.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.846192 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec243846d2c45270951b49b75b894dd9 is 104.576ms for 1087 entries. Jun 25 16:27:28.846192 systemd-journald[1484]: System Journal (/var/log/journal/ec243846d2c45270951b49b75b894dd9) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:27:28.956641 systemd-journald[1484]: Received client request to flush runtime journal. Jun 25 16:27:28.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.882730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:28.888226 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jun 25 16:27:28.959957 udevadm[1508]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:27:28.958402 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:27:28.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.972010 kernel: ACPI: bus type drm_connector registered Jun 25 16:27:28.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.973327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:27:28.973514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:27:28.991265 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:27:28.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:28.998225 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:27:29.040350 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:27:29.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:29.047221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:27:29.080259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:27:29.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:29.938797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:27:29.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:29.938000 audit: BPF prog-id=18 op=LOAD Jun 25 16:27:29.938000 audit: BPF prog-id=19 op=LOAD Jun 25 16:27:29.938000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:27:29.938000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:27:29.947313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:29.979925 systemd-udevd[1524]: Using default interface naming scheme 'v252'. Jun 25 16:27:30.033843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:27:30.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.034000 audit: BPF prog-id=20 op=LOAD Jun 25 16:27:30.040181 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:27:30.057000 audit: BPF prog-id=21 op=LOAD Jun 25 16:27:30.057000 audit: BPF prog-id=22 op=LOAD Jun 25 16:27:30.057000 audit: BPF prog-id=23 op=LOAD Jun 25 16:27:30.063284 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:27:30.128208 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:27:30.138268 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:27:30.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.139730 (udev-worker)[1539]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:27:30.181029 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1537) Jun 25 16:27:30.255907 systemd-networkd[1531]: lo: Link UP Jun 25 16:27:30.256549 systemd-networkd[1531]: lo: Gained carrier Jun 25 16:27:30.257277 systemd-networkd[1531]: Enumeration completed Jun 25 16:27:30.258297 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:27:30.258602 systemd-networkd[1531]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:27:30.258677 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:27:30.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.266904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:30.263646 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:27:30.268783 systemd-networkd[1531]: eth0: Link UP Jun 25 16:27:30.269063 systemd-networkd[1531]: eth0: Gained carrier Jun 25 16:27:30.269569 systemd-networkd[1531]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 16:27:30.280203 systemd-networkd[1531]: eth0: DHCPv4 address 172.31.18.172/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:27:30.289037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:27:30.297002 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jun 25 16:27:30.301556 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:27:30.301633 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jun 25 16:27:30.306842 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1536) Jun 25 16:27:30.334013 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 16:27:30.419010 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jun 25 16:27:30.429004 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:27:30.490906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:27:30.577631 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:27:30.581877 kernel: kauditd_printk_skb: 101 callbacks suppressed Jun 25 16:27:30.581925 kernel: audit: type=1130 audit(1719332850.577:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.583297 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:27:30.609348 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:27:30.640565 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:27:30.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.641970 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:30.644878 kernel: audit: type=1130 audit(1719332850.640:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.653303 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:27:30.660329 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:27:30.695612 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:27:30.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.697604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
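As a quick check on the DHCPv4 lease logged above (172.31.18.172/20 with gateway 172.31.16.1, acquired from 172.31.16.1), the following minimal Python sketch, using only the values from that log entry, confirms the gateway sits inside the /20 the lease describes and is therefore reachable on-link:

# Minimal sketch using the addresses reported by systemd-networkd above;
# nothing here is read from a live system.
import ipaddress

lease = ipaddress.ip_interface("172.31.18.172/20")   # address acquired on eth0
gateway = ipaddress.ip_address("172.31.16.1")        # gateway / DHCP server

print(lease.network)             # 172.31.16.0/20
print(gateway in lease.network)  # True -> gateway is on-link for the lease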
Jun 25 16:27:30.700003 kernel: audit: type=1130 audit(1719332850.696:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.700831 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:27:30.700872 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:27:30.702175 systemd[1]: Reached target machines.target - Containers. Jun 25 16:27:30.719299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:27:30.721901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:30.722457 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:30.724642 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:27:30.728683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:27:30.732563 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:27:30.747233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:27:30.750878 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1642 (bootctl) Jun 25 16:27:30.758198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:27:30.767003 kernel: loop0: detected capacity change from 0 to 80584 Jun 25 16:27:30.789454 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:27:30.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.793115 kernel: audit: type=1130 audit(1719332850.788:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:30.846270 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:27:30.876054 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:27:30.983007 kernel: loop2: detected capacity change from 0 to 60984 Jun 25 16:27:30.988428 systemd-fsck[1651]: fsck.fat 4.2 (2021-01-31) Jun 25 16:27:30.988428 systemd-fsck[1651]: /dev/nvme0n1p1: 808 files, 120378/258078 clusters Jun 25 16:27:31.008559 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:27:31.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:31.024247 kernel: audit: type=1130 audit(1719332851.014:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.029384 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:27:31.072228 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:27:31.079045 kernel: loop3: detected capacity change from 0 to 211296 Jun 25 16:27:31.144833 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:27:31.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.149034 kernel: audit: type=1130 audit(1719332851.145:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.337011 kernel: loop4: detected capacity change from 0 to 80584 Jun 25 16:27:31.375030 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:27:31.424012 kernel: loop6: detected capacity change from 0 to 60984 Jun 25 16:27:31.459022 kernel: loop7: detected capacity change from 0 to 211296 Jun 25 16:27:31.498363 (sd-sysext)[1668]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 16:27:31.500476 (sd-sysext)[1668]: Merged extensions into '/usr'. Jun 25 16:27:31.502906 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:27:31.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.508000 kernel: audit: type=1130 audit(1719332851.503:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.509242 systemd[1]: Starting ensure-sysext.service... Jun 25 16:27:31.511736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:27:31.541116 systemd-tmpfiles[1670]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:27:31.548869 systemd-tmpfiles[1670]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:27:31.549894 systemd-tmpfiles[1670]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:27:31.550967 systemd[1]: Reloading. Jun 25 16:27:31.555503 systemd-tmpfiles[1670]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:27:31.854401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:27:31.912961 ldconfig[1641]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 25 16:27:31.951084 systemd-networkd[1531]: eth0: Gained IPv6LL Jun 25 16:27:31.949000 audit: BPF prog-id=24 op=LOAD Jun 25 16:27:31.951000 audit: BPF prog-id=25 op=LOAD Jun 25 16:27:31.953268 kernel: audit: type=1334 audit(1719332851.949:150): prog-id=24 op=LOAD Jun 25 16:27:31.953328 kernel: audit: type=1334 audit(1719332851.951:151): prog-id=25 op=LOAD Jun 25 16:27:31.953361 kernel: audit: type=1334 audit(1719332851.952:152): prog-id=18 op=UNLOAD Jun 25 16:27:31.952000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:27:31.952000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:27:31.953000 audit: BPF prog-id=26 op=LOAD Jun 25 16:27:31.953000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:27:31.953000 audit: BPF prog-id=27 op=LOAD Jun 25 16:27:31.953000 audit: BPF prog-id=28 op=LOAD Jun 25 16:27:31.953000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:27:31.953000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:27:31.955000 audit: BPF prog-id=29 op=LOAD Jun 25 16:27:31.955000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:27:31.957000 audit: BPF prog-id=30 op=LOAD Jun 25 16:27:31.957000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:27:31.958000 audit: BPF prog-id=31 op=LOAD Jun 25 16:27:31.958000 audit: BPF prog-id=32 op=LOAD Jun 25 16:27:31.958000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:27:31.958000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:27:31.963477 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:27:31.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.965318 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:27:31.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.970745 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:27:31.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:31.983219 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:27:31.995000 audit: BPF prog-id=33 op=LOAD Jun 25 16:27:32.000000 audit: BPF prog-id=34 op=LOAD Jun 25 16:27:31.989376 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:27:31.994553 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:27:31.999140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:27:32.009334 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:27:32.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.013473 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:27:32.024258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jun 25 16:27:32.029782 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:27:32.064000 audit[1748]: SYSTEM_BOOT pid=1748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.069924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.070435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:32.080145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:32.084526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:32.090755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:32.092779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:32.093100 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:32.093371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.102316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:32.102471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:32.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.108338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:32.108617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:32.111851 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:27:32.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:32.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.114045 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:32.114209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:32.117776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:32.118073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:32.121329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.121757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:32.129433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:32.133051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:32.143830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:32.145499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:32.145732 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:32.146096 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.151237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.151722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:32.158901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:27:32.160373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:32.160590 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:32.160823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:32.166168 systemd[1]: Finished ensure-sysext.service. Jun 25 16:27:32.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.171795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:32.172068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 16:27:32.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.182967 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:27:32.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.194449 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:27:32.197299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:32.197508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:32.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.199162 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:32.199355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.201422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:32.201486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:32.215176 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:27:32.216847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:27:32.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.218271 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:27:32.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:32.221414 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:27:32.222826 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:27:32.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.227255 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:27:32.227435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:27:32.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:32.251762 augenrules[1768]: No rules Jun 25 16:27:32.249000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:27:32.249000 audit[1768]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda81d54b0 a2=420 a3=0 items=0 ppid=1739 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.249000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:27:32.252674 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:32.253539 systemd-resolved[1743]: Positive Trust Anchors: Jun 25 16:27:32.253807 systemd-resolved[1743]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:27:32.253910 systemd-resolved[1743]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:27:32.259823 systemd-resolved[1743]: Defaulting to hostname 'linux'. Jun 25 16:27:32.261861 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:27:32.263200 systemd[1]: Reached target network.target - Network. Jun 25 16:27:32.264203 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:27:32.265263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:32.266411 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:27:32.267618 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:27:32.269020 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:27:32.270509 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:27:32.271830 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
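The PROCTITLE record in the audit-rules block above carries the auditctl command line as NUL-separated hex. Decoding it (the hex string below is copied verbatim from that record) shows the invocation behind the "No rules" result, /sbin/auditctl -R /etc/audit/audit.rules:

# Minimal sketch: decode the hex-encoded proctitle from the audit record above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(proctitle).split(b"\x00")
print([a.decode() for a in argv])   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']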
Jun 25 16:27:32.272995 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:27:32.274276 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:27:32.274314 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:27:32.275391 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:27:32.277008 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:27:32.279511 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:27:32.284868 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:27:32.287755 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:32.288546 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:27:32.291301 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:27:32.292511 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:27:32.293525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:27:32.293546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:27:32.295149 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:27:32.298314 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:27:32.301715 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:27:32.305866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:27:32.310645 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:27:32.311830 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:27:32.314808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:32.317539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:27:32.322929 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:27:32.331882 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:27:32.350777 jq[1778]: false Jun 25 16:27:32.345410 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 16:27:32.350976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:27:32.357740 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:27:32.364494 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:27:32.365957 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:32.366049 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:27:32.366711 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jun 25 16:27:32.368208 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:27:32.372003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:27:32.380311 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:27:32.380730 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:27:32.394358 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:27:32.394646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:27:32.404851 extend-filesystems[1779]: Found loop4 Jun 25 16:27:32.406153 extend-filesystems[1779]: Found loop5 Jun 25 16:27:32.407177 extend-filesystems[1779]: Found loop6 Jun 25 16:27:32.408171 extend-filesystems[1779]: Found loop7 Jun 25 16:27:32.409014 extend-filesystems[1779]: Found nvme0n1 Jun 25 16:27:32.409935 extend-filesystems[1779]: Found nvme0n1p1 Jun 25 16:27:32.410794 extend-filesystems[1779]: Found nvme0n1p2 Jun 25 16:27:32.412110 extend-filesystems[1779]: Found nvme0n1p3 Jun 25 16:27:32.413005 extend-filesystems[1779]: Found usr Jun 25 16:27:32.414058 extend-filesystems[1779]: Found nvme0n1p4 Jun 25 16:27:32.414902 extend-filesystems[1779]: Found nvme0n1p6 Jun 25 16:27:32.416039 extend-filesystems[1779]: Found nvme0n1p7 Jun 25 16:27:32.417759 extend-filesystems[1779]: Found nvme0n1p9 Jun 25 16:27:32.419034 extend-filesystems[1779]: Checking size of /dev/nvme0n1p9 Jun 25 16:27:33.091353 systemd-timesyncd[1744]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Jun 25 16:27:33.091496 systemd-timesyncd[1744]: Initial clock synchronization to Tue 2024-06-25 16:27:33.090921 UTC. Jun 25 16:27:33.113559 jq[1792]: true Jun 25 16:27:33.149027 systemd-resolved[1743]: Clock change detected. Flushing caches. Jun 25 16:27:33.162810 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 16:27:33.172450 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 16:27:33.179071 jq[1814]: true Jun 25 16:27:33.179817 tar[1798]: linux-amd64/helm Jun 25 16:27:33.187014 update_engine[1791]: I0625 16:27:33.186945 1791 main.cc:92] Flatcar Update Engine starting Jun 25 16:27:33.212944 extend-filesystems[1779]: Resized partition /dev/nvme0n1p9 Jun 25 16:27:33.221938 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:27:33.238735 dbus-daemon[1777]: [system] SELinux support is enabled Jun 25 16:27:33.238985 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:27:33.259597 extend-filesystems[1833]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:27:33.265492 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 16:27:33.247151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:27:33.247228 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:27:33.249353 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:27:33.249384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jun 25 16:27:33.266273 dbus-daemon[1777]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1531 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 16:27:33.275695 update_engine[1791]: I0625 16:27:33.269141 1791 update_check_scheduler.cc:74] Next update check in 3m40s Jun 25 16:27:33.269691 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:27:33.267394 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 16:27:33.279489 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 16:27:33.284478 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:27:33.290651 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:27:33.290900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:27:33.500224 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1539) Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: Initializing new seelog logger Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: New Seelog Logger Creation Complete Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 processing appconfig overrides Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 processing appconfig overrides Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 processing appconfig overrides Jun 25 16:27:33.630572 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO Proxy environment variables: Jun 25 16:27:33.621011 systemd-logind[1790]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:27:33.621039 systemd-logind[1790]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 25 16:27:33.621066 systemd-logind[1790]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:27:33.622253 systemd-logind[1790]: New seat seat0. Jun 25 16:27:33.638002 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:27:33.651179 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.651179 amazon-ssm-agent[1816]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:27:33.651179 amazon-ssm-agent[1816]: 2024/06/25 16:27:33 processing appconfig overrides Jun 25 16:27:33.654207 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 16:27:33.661408 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 16:27:33.661703 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jun 25 16:27:33.664426 dbus-daemon[1777]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1835 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 16:27:33.672691 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 16:27:33.699415 bash[1846]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:27:33.699617 extend-filesystems[1833]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 16:27:33.699617 extend-filesystems[1833]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:27:33.699617 extend-filesystems[1833]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 16:27:33.698419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:27:33.737302 extend-filesystems[1779]: Resized filesystem in /dev/nvme0n1p9 Jun 25 16:27:33.739404 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO https_proxy: Jun 25 16:27:33.700703 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:27:33.700999 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:27:33.713905 systemd[1]: Starting sshkeys.service... Jun 25 16:27:33.769783 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:27:33.778949 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 16:27:33.791701 polkitd[1880]: Started polkitd version 121 Jun 25 16:27:33.814742 polkitd[1880]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 16:27:33.815034 polkitd[1880]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 16:27:33.815852 polkitd[1880]: Finished loading, compiling and executing 2 rules Jun 25 16:27:33.830925 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 16:27:33.831256 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 16:27:33.834559 polkitd[1880]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 16:27:33.835208 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO http_proxy: Jun 25 16:27:33.924409 locksmithd[1839]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:27:33.925295 systemd-hostnamed[1835]: Hostname set to (transient) Jun 25 16:27:33.926022 systemd-resolved[1743]: System hostname changed to 'ip-172-31-18-172'. 
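For scale, the online resize reported above takes /dev/nvme0n1p9 from 553472 to 1489915 blocks at the 4k block size stated by extend-filesystems; the short sketch below only restates that arithmetic:

# Block counts and 4k block size taken from the EXT4/resize2fs messages above.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915
to_gib = lambda blocks: blocks * BLOCK / 2**30
print(f"before: {to_gib(old_blocks):.2f} GiB")               # ~2.11 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")               # ~5.68 GiB
print(f"growth: {to_gib(new_blocks - old_blocks):.2f} GiB")  # ~3.57 GiB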
Jun 25 16:27:33.936127 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO no_proxy: Jun 25 16:27:34.020365 coreos-metadata[1776]: Jun 25 16:27:34.020 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:27:34.022671 coreos-metadata[1776]: Jun 25 16:27:34.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 16:27:34.023570 coreos-metadata[1776]: Jun 25 16:27:34.023 INFO Fetch successful Jun 25 16:27:34.023733 coreos-metadata[1776]: Jun 25 16:27:34.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 16:27:34.025567 coreos-metadata[1776]: Jun 25 16:27:34.025 INFO Fetch successful Jun 25 16:27:34.025641 coreos-metadata[1776]: Jun 25 16:27:34.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 16:27:34.027590 coreos-metadata[1776]: Jun 25 16:27:34.027 INFO Fetch successful Jun 25 16:27:34.027664 coreos-metadata[1776]: Jun 25 16:27:34.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 16:27:34.028799 coreos-metadata[1776]: Jun 25 16:27:34.028 INFO Fetch successful Jun 25 16:27:34.028872 coreos-metadata[1776]: Jun 25 16:27:34.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 16:27:34.029634 coreos-metadata[1776]: Jun 25 16:27:34.029 INFO Fetch failed with 404: resource not found Jun 25 16:27:34.030093 coreos-metadata[1776]: Jun 25 16:27:34.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 16:27:34.031601 coreos-metadata[1776]: Jun 25 16:27:34.031 INFO Fetch successful Jun 25 16:27:34.031674 coreos-metadata[1776]: Jun 25 16:27:34.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 16:27:34.033617 coreos-metadata[1776]: Jun 25 16:27:34.033 INFO Fetch successful Jun 25 16:27:34.033719 coreos-metadata[1776]: Jun 25 16:27:34.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 16:27:34.034540 coreos-metadata[1776]: Jun 25 16:27:34.034 INFO Fetch successful Jun 25 16:27:34.034674 coreos-metadata[1776]: Jun 25 16:27:34.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 16:27:34.035376 coreos-metadata[1776]: Jun 25 16:27:34.035 INFO Fetch successful Jun 25 16:27:34.035448 coreos-metadata[1776]: Jun 25 16:27:34.035 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 16:27:34.036179 coreos-metadata[1776]: Jun 25 16:27:34.036 INFO Fetch successful Jun 25 16:27:34.038511 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO Checking if agent identity type OnPrem can be assumed Jun 25 16:27:34.062951 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:27:34.065044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
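The coreos-metadata fetches above follow the usual EC2 IMDSv2 pattern: PUT a session token at /latest/api/token, then GET each meta-data path with that token. The sketch below illustrates that flow with the same endpoints the agent logged; it is not coreos-metadata's own code, the header names are the standard IMDSv2 ones rather than anything taken from this log, and it only runs from inside an EC2 instance:

# Minimal sketch of the token-then-fetch sequence seen above (IMDSv2 style).
# Not the coreos-metadata implementation; works only on an EC2 instance.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=300):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    return urllib.request.urlopen(req, timeout=2).read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

token = imds_token()
# Same paths the agent fetched above; some (e.g. "ipv6") can return 404.
for path in ("instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    print(path, "=", imds_get(path, token))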
Jun 25 16:27:34.156468 amazon-ssm-agent[1816]: 2024-06-25 16:27:33 INFO Checking if agent identity type EC2 can be assumed Jun 25 16:27:34.269003 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO Agent will take identity from EC2 Jun 25 16:27:34.318710 coreos-metadata[1901]: Jun 25 16:27:34.303 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:27:34.330772 coreos-metadata[1901]: Jun 25 16:27:34.330 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 16:27:34.331809 containerd[1802]: time="2024-06-25T16:27:34.331702365Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:27:34.334368 coreos-metadata[1901]: Jun 25 16:27:34.334 INFO Fetch successful Jun 25 16:27:34.334505 coreos-metadata[1901]: Jun 25 16:27:34.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 16:27:34.337861 coreos-metadata[1901]: Jun 25 16:27:34.337 INFO Fetch successful Jun 25 16:27:34.341799 unknown[1901]: wrote ssh authorized keys file for user: core Jun 25 16:27:34.370911 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:27:34.377517 update-ssh-keys[1956]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:27:34.378671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:27:34.384351 systemd[1]: Finished sshkeys.service. Jun 25 16:27:34.471466 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:27:34.575449 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:27:34.609343 containerd[1802]: time="2024-06-25T16:27:34.609238881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:27:34.609479 containerd[1802]: time="2024-06-25T16:27:34.609351559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.618639554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.618699314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619010121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619036089Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619140620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619231204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619252802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619558 containerd[1802]: time="2024-06-25T16:27:34.619332356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619938 containerd[1802]: time="2024-06-25T16:27:34.619580222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.619938 containerd[1802]: time="2024-06-25T16:27:34.619603265Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:27:34.619938 containerd[1802]: time="2024-06-25T16:27:34.619616027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:34.632734 containerd[1802]: time="2024-06-25T16:27:34.632630525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:34.632734 containerd[1802]: time="2024-06-25T16:27:34.632730340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:27:34.633207 containerd[1802]: time="2024-06-25T16:27:34.632920902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:27:34.633207 containerd[1802]: time="2024-06-25T16:27:34.633084378Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651435767Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651496652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651678149Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651795676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651852592Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651873673Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.651893569Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652471873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652538928Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652561437Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652617118Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652698275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652728424Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653324 containerd[1802]: time="2024-06-25T16:27:34.652749266Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.652768260Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.652788496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.652811104Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.652837388Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.652852673Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653004414Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653644603Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653683267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653704955Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653739535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653817168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653837247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653856214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.653934 containerd[1802]: time="2024-06-25T16:27:34.653874190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.653897677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.653920253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.653938961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.653957212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.653977066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654131675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654161578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654180925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654329274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654349360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654370229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654389148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:27:34.654472 containerd[1802]: time="2024-06-25T16:27:34.654409122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:27:34.655423 containerd[1802]: time="2024-06-25T16:27:34.655074174Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:27:34.655423 containerd[1802]: time="2024-06-25T16:27:34.655169363Z" level=info msg="Connect containerd service" Jun 25 16:27:34.655423 containerd[1802]: time="2024-06-25T16:27:34.655228583Z" level=info msg="using legacy CRI server" Jun 25 16:27:34.655423 containerd[1802]: time="2024-06-25T16:27:34.655240343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:27:34.655423 containerd[1802]: time="2024-06-25T16:27:34.655382240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:27:34.656216 containerd[1802]: time="2024-06-25T16:27:34.656169139Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:27:34.656350 containerd[1802]: time="2024-06-25T16:27:34.656246763Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:27:34.656350 containerd[1802]: time="2024-06-25T16:27:34.656324125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:27:34.656350 containerd[1802]: time="2024-06-25T16:27:34.656342763Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:27:34.656477 containerd[1802]: time="2024-06-25T16:27:34.656357519Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:27:34.656835 containerd[1802]: time="2024-06-25T16:27:34.656809393Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:27:34.656901 containerd[1802]: time="2024-06-25T16:27:34.656873225Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:27:34.656981 containerd[1802]: time="2024-06-25T16:27:34.656949829Z" level=info msg="Start subscribing containerd event" Jun 25 16:27:34.657028 containerd[1802]: time="2024-06-25T16:27:34.657000836Z" level=info msg="Start recovering state" Jun 25 16:27:34.657224 containerd[1802]: time="2024-06-25T16:27:34.657122148Z" level=info msg="Start event monitor" Jun 25 16:27:34.657280 containerd[1802]: time="2024-06-25T16:27:34.657234389Z" level=info msg="Start snapshots syncer" Jun 25 16:27:34.657280 containerd[1802]: time="2024-06-25T16:27:34.657251277Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:27:34.657280 containerd[1802]: time="2024-06-25T16:27:34.657263767Z" level=info msg="Start streaming server" Jun 25 16:27:34.657498 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:27:34.658447 containerd[1802]: time="2024-06-25T16:27:34.657641237Z" level=info msg="containerd successfully booted in 0.346480s" Jun 25 16:27:34.682424 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 16:27:34.797894 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 25 16:27:34.898366 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 16:27:34.999979 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jun 25 16:27:35.100353 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [Registrar] Starting registrar module Jun 25 16:27:35.200632 amazon-ssm-agent[1816]: 2024-06-25 16:27:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 16:27:35.207141 sshd_keygen[1813]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:27:35.288791 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:27:35.293698 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:27:35.305997 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:27:35.306283 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:27:35.312792 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:27:35.340390 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:27:35.347018 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jun 25 16:27:35.350626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:27:35.352338 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:27:35.491294 amazon-ssm-agent[1816]: 2024-06-25 16:27:35 INFO [EC2Identity] EC2 registration was successful. Jun 25 16:27:35.519002 amazon-ssm-agent[1816]: 2024-06-25 16:27:35 INFO [CredentialRefresher] credentialRefresher has started Jun 25 16:27:35.519002 amazon-ssm-agent[1816]: 2024-06-25 16:27:35 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 16:27:35.519002 amazon-ssm-agent[1816]: 2024-06-25 16:27:35 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 16:27:35.542835 tar[1798]: linux-amd64/LICENSE Jun 25 16:27:35.543992 tar[1798]: linux-amd64/README.md Jun 25 16:27:35.554090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:27:35.591528 amazon-ssm-agent[1816]: 2024-06-25 16:27:35 INFO [CredentialRefresher] Next credential rotation will be in 31.016656592233332 minutes Jun 25 16:27:35.719573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:35.722077 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:27:35.726311 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:27:35.751445 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:27:35.751666 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:27:35.755935 systemd[1]: Startup finished in 690ms (kernel) + 6.723s (initrd) + 7.688s (userspace) = 15.102s. Jun 25 16:27:36.536080 amazon-ssm-agent[1816]: 2024-06-25 16:27:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 16:27:36.637863 amazon-ssm-agent[1816]: 2024-06-25 16:27:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2008) started Jun 25 16:27:36.743394 amazon-ssm-agent[1816]: 2024-06-25 16:27:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 16:27:36.930230 kubelet[2000]: E0625 16:27:36.930061 2000 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:27:36.932566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:27:36.932754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:27:36.933165 systemd[1]: kubelet.service: Consumed 1.077s CPU time. Jun 25 16:27:41.223781 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:27:41.234765 systemd[1]: Started sshd@0-172.31.18.172:22-139.178.89.65:55226.service - OpenSSH per-connection server daemon (139.178.89.65:55226). Jun 25 16:27:41.405421 sshd[2019]: Accepted publickey for core from 139.178.89.65 port 55226 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:41.408503 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:41.422273 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
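systemd's `Startup finished in 690ms (kernel) + 6.723s (initrd) + 7.688s (userspace) = 15.102s` line above is plain arithmetic over the boot phases. A small sketch, assuming one wanted to recompute the total from the printed components; note that each component is already rounded to the millisecond, so the recomputed sum can differ from the printed total by about a millisecond:

```python
import re

STARTUP = ("Startup finished in 690ms (kernel) + 6.723s (initrd) "
           "+ 7.688s (userspace) = 15.102s")

def to_ms(token: str) -> float:
    """Convert a systemd duration token like '690ms' or '6.723s' to milliseconds."""
    if token.endswith("ms"):
        return float(token[:-2])
    if token.endswith("s"):
        return float(token[:-1]) * 1000.0
    raise ValueError(token)

parts = re.findall(r"([\d.]+m?s) \((\w+)\)", STARTUP)
total = re.search(r"= ([\d.]+m?s)", STARTUP).group(1)

recomputed = sum(to_ms(value) for value, _phase in parts)
print(dict(parts))               # {'690ms': 'kernel', '6.723s': 'initrd', '7.688s': 'userspace'}
print(recomputed, to_ms(total))  # 15101.0 vs 15102.0 -- 1ms of display rounding
```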
Jun 25 16:27:41.429610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:27:41.436318 systemd-logind[1790]: New session 1 of user core. Jun 25 16:27:41.448333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:27:41.454622 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:27:41.458174 (systemd)[2022]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:41.585810 systemd[2022]: Queued start job for default target default.target. Jun 25 16:27:41.595800 systemd[2022]: Reached target paths.target - Paths. Jun 25 16:27:41.595836 systemd[2022]: Reached target sockets.target - Sockets. Jun 25 16:27:41.595853 systemd[2022]: Reached target timers.target - Timers. Jun 25 16:27:41.595870 systemd[2022]: Reached target basic.target - Basic System. Jun 25 16:27:41.595929 systemd[2022]: Reached target default.target - Main User Target. Jun 25 16:27:41.595970 systemd[2022]: Startup finished in 129ms. Jun 25 16:27:41.596555 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:27:41.604380 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:27:41.776767 systemd[1]: Started sshd@1-172.31.18.172:22-139.178.89.65:55236.service - OpenSSH per-connection server daemon (139.178.89.65:55236). Jun 25 16:27:41.936558 sshd[2031]: Accepted publickey for core from 139.178.89.65 port 55236 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:41.938601 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:41.951450 systemd-logind[1790]: New session 2 of user core. Jun 25 16:27:41.962458 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:27:42.087749 sshd[2031]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:42.091328 systemd[1]: sshd@1-172.31.18.172:22-139.178.89.65:55236.service: Deactivated successfully. Jun 25 16:27:42.092686 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:27:42.093420 systemd-logind[1790]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:27:42.094365 systemd-logind[1790]: Removed session 2. Jun 25 16:27:42.141784 systemd[1]: Started sshd@2-172.31.18.172:22-139.178.89.65:55248.service - OpenSSH per-connection server daemon (139.178.89.65:55248). Jun 25 16:27:42.300652 sshd[2037]: Accepted publickey for core from 139.178.89.65 port 55248 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:42.303054 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:42.308622 systemd-logind[1790]: New session 3 of user core. Jun 25 16:27:42.314441 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:27:42.432259 sshd[2037]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:42.437252 systemd[1]: sshd@2-172.31.18.172:22-139.178.89.65:55248.service: Deactivated successfully. Jun 25 16:27:42.438392 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:27:42.439824 systemd-logind[1790]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:27:42.441838 systemd-logind[1790]: Removed session 3. Jun 25 16:27:42.475019 systemd[1]: Started sshd@3-172.31.18.172:22-139.178.89.65:55250.service - OpenSSH per-connection server daemon (139.178.89.65:55250). 
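Each SSH connection above opens a numbered `session-N.scope`, runs briefly, and logs out, and the `systemd-logind` lines alone are enough to pair the opens and closes. A rough sketch, assuming the console output has been captured one journal entry per line to a file such as `console.log` (the file name is illustrative):

```python
import re
from datetime import datetime

OPEN_RE = re.compile(r"^(\w{3} \d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
CLOSE_RE = re.compile(r"^(\w{3} \d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(stamp: str) -> datetime:
    # "Jun 25 16:27:41.436318" -- the console log carries no year
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

def session_lifetimes(lines):
    """Yield (session_id, user, seconds_alive) pairs from logind open/close messages."""
    opened = {}
    for line in lines:
        if m := OPEN_RE.match(line):
            opened[m.group(2)] = (m.group(3), ts(m.group(1)))
        elif m := CLOSE_RE.match(line):
            if m.group(2) in opened:
                user, start = opened.pop(m.group(2))
                yield m.group(2), user, (ts(m.group(1)) - start).total_seconds()

# with open("console.log") as fh:            # illustrative path
#     for sid, user, secs in session_lifetimes(fh):
#         print(f"session {sid} ({user}) lived {secs:.1f}s")
```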
Jun 25 16:27:42.632674 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 55250 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:42.633665 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:42.639004 systemd-logind[1790]: New session 4 of user core. Jun 25 16:27:42.649464 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:27:42.771533 sshd[2043]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:42.775136 systemd[1]: sshd@3-172.31.18.172:22-139.178.89.65:55250.service: Deactivated successfully. Jun 25 16:27:42.776072 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:27:42.776870 systemd-logind[1790]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:27:42.778089 systemd-logind[1790]: Removed session 4. Jun 25 16:27:42.804856 systemd[1]: Started sshd@4-172.31.18.172:22-139.178.89.65:55266.service - OpenSSH per-connection server daemon (139.178.89.65:55266). Jun 25 16:27:42.968878 sshd[2049]: Accepted publickey for core from 139.178.89.65 port 55266 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:42.970154 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:42.976241 systemd-logind[1790]: New session 5 of user core. Jun 25 16:27:42.982434 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:27:43.097406 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:27:43.097835 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:43.114557 sudo[2052]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:43.137422 sshd[2049]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:43.141338 systemd[1]: sshd@4-172.31.18.172:22-139.178.89.65:55266.service: Deactivated successfully. Jun 25 16:27:43.142801 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:27:43.143575 systemd-logind[1790]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:27:43.144837 systemd-logind[1790]: Removed session 5. Jun 25 16:27:43.185784 systemd[1]: Started sshd@5-172.31.18.172:22-139.178.89.65:55280.service - OpenSSH per-connection server daemon (139.178.89.65:55280). Jun 25 16:27:43.343310 sshd[2056]: Accepted publickey for core from 139.178.89.65 port 55280 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:43.345418 sshd[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:43.351386 systemd-logind[1790]: New session 6 of user core. Jun 25 16:27:43.362450 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:27:43.468820 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:27:43.469199 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:43.473923 sudo[2060]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:43.481519 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:27:43.482053 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:43.502785 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
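The `sudo` entries above record exactly which privileged commands the `core` user ran (`setenforce 1`, pruning the audit rule files, restarting `audit-rules`). A small sketch for extracting those, under the same one-entry-per-line capture assumption as before:

```python
import re

# sudo logs lines of the form:
#   sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
SUDO_RE = re.compile(r"sudo\[\d+\]:\s+(\S+) : .*?USER=(\S+) ; COMMAND=(.+)$")

def sudo_commands(lines):
    """Yield (invoking_user, target_user, command) for every sudo invocation found."""
    for line in lines:
        if m := SUDO_RE.search(line):
            yield m.groups()

sample = ("Jun 25 16:27:43.097406 sudo[2052]: core : PWD=/home/core ; "
          "USER=root ; COMMAND=/usr/sbin/setenforce 1")
print(next(sudo_commands([sample])))
# ('core', 'root', '/usr/sbin/setenforce 1')
```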
Jun 25 16:27:43.503000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:27:43.505262 kernel: kauditd_printk_skb: 45 callbacks suppressed Jun 25 16:27:43.505342 kernel: audit: type=1305 audit(1719332863.503:196): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:27:43.503000 audit[2063]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb1add6d0 a2=420 a3=0 items=0 ppid=1 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.509270 kernel: audit: type=1300 audit(1719332863.503:196): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb1add6d0 a2=420 a3=0 items=0 ppid=1 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.509345 kernel: audit: type=1327 audit(1719332863.503:196): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:27:43.503000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:27:43.510151 auditctl[2063]: No rules Jun 25 16:27:43.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.513590 kernel: audit: type=1131 audit(1719332863.509:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.510678 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:27:43.510865 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:43.514382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:27:43.546141 augenrules[2080]: No rules Jun 25 16:27:43.550235 kernel: audit: type=1130 audit(1719332863.546:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.548965 sudo[2059]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:43.546968 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:43.546000 audit[2059]: USER_END pid=2059 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.546000 audit[2059]: CRED_DISP pid=2059 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:43.555334 kernel: audit: type=1106 audit(1719332863.546:199): pid=2059 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.555394 kernel: audit: type=1104 audit(1719332863.546:200): pid=2059 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.572437 sshd[2056]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:43.572000 audit[2056]: USER_END pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.573000 audit[2056]: CRED_DISP pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.577631 systemd[1]: sshd@5-172.31.18.172:22-139.178.89.65:55280.service: Deactivated successfully. Jun 25 16:27:43.578953 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:27:43.580463 kernel: audit: type=1106 audit(1719332863.572:201): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.580509 kernel: audit: type=1104 audit(1719332863.573:202): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.172:22-139.178.89.65:55280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.581652 systemd-logind[1790]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:27:43.583392 kernel: audit: type=1131 audit(1719332863.577:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.172:22-139.178.89.65:55280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.583323 systemd-logind[1790]: Removed session 6. Jun 25 16:27:43.609385 systemd[1]: Started sshd@6-172.31.18.172:22-139.178.89.65:55296.service - OpenSSH per-connection server daemon (139.178.89.65:55296). Jun 25 16:27:43.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.172:22-139.178.89.65:55296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:43.773000 audit[2086]: USER_ACCT pid=2086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.775226 sshd[2086]: Accepted publickey for core from 139.178.89.65 port 55296 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:27:43.775000 audit[2086]: CRED_ACQ pid=2086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.775000 audit[2086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfe71d8f0 a2=3 a3=7f1aea1e9480 items=0 ppid=1 pid=2086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.775000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:43.776739 sshd[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:43.784391 systemd-logind[1790]: New session 7 of user core. Jun 25 16:27:43.786392 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:27:43.791000 audit[2086]: USER_START pid=2086 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.793000 audit[2088]: CRED_ACQ pid=2088 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:43.888000 audit[2089]: USER_ACCT pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.889000 audit[2089]: CRED_REFR pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:43.890129 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:27:43.891110 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:43.892000 audit[2089]: USER_START pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:44.132897 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:27:44.562334 dockerd[2098]: time="2024-06-25T16:27:44.562269669Z" level=info msg="Starting up" Jun 25 16:27:44.638693 systemd[1]: var-lib-docker-metacopy\x2dcheck2800104074-merged.mount: Deactivated successfully. Jun 25 16:27:44.665542 dockerd[2098]: time="2024-06-25T16:27:44.665491314Z" level=info msg="Loading containers: start." 
Jun 25 16:27:44.742000 audit[2130]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.742000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec7a87820 a2=0 a3=7fa3bc5f7e90 items=0 ppid=2098 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.742000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:27:44.753000 audit[2132]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.753000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffec929e30 a2=0 a3=7f3cdb171e90 items=0 ppid=2098 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.753000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:27:44.763000 audit[2134]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.763000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffecfa5aa90 a2=0 a3=7f4afdc9de90 items=0 ppid=2098 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:27:44.773000 audit[2136]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.773000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2efef1e0 a2=0 a3=7fac4d7cee90 items=0 ppid=2098 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.773000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:27:44.786000 audit[2138]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.786000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea1e72430 a2=0 a3=7f8461304e90 items=0 ppid=2098 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.786000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:27:44.793000 audit[2140]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:27:44.793000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca418f380 a2=0 a3=7f726f20be90 items=0 ppid=2098 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.793000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:27:44.816000 audit[2142]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.816000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8ed472e0 a2=0 a3=7fa3b9552e90 items=0 ppid=2098 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.816000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:27:44.819000 audit[2144]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.819000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffca9cce620 a2=0 a3=7f98929a5e90 items=0 ppid=2098 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.819000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:27:44.824000 audit[2146]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.824000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcc7f12a20 a2=0 a3=7fd88c99be90 items=0 ppid=2098 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.824000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:44.847000 audit[2150]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.847000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe20c80da0 a2=0 a3=7f59832ffe90 items=0 ppid=2098 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:44.849000 audit[2151]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:44.849000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd0ae2ec90 a2=0 a3=7f4a8af17e90 items=0 ppid=2098 
pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:44.861213 kernel: Initializing XFRM netlink socket Jun 25 16:27:44.898920 (udev-worker)[2110]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:27:45.021000 audit[2159]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.021000 audit[2159]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffffed56a30 a2=0 a3=7f5565706e90 items=0 ppid=2098 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.021000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:27:45.035000 audit[2162]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.035000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd9ae1c3b0 a2=0 a3=7fc34e8b6e90 items=0 ppid=2098 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.035000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:27:45.041000 audit[2166]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.041000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffabbfb3d0 a2=0 a3=7f8748d50e90 items=0 ppid=2098 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.041000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:27:45.043000 audit[2168]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.043000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe71998200 a2=0 a3=7fccb5dc3e90 items=0 ppid=2098 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.043000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:27:45.046000 audit[2170]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:27:45.046000 audit[2170]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcbaf5d730 a2=0 a3=7f4d6e62be90 items=0 ppid=2098 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.046000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:27:45.050000 audit[2172]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.050000 audit[2172]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff6d94e750 a2=0 a3=7fb38598fe90 items=0 ppid=2098 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.050000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:27:45.052000 audit[2174]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.052000 audit[2174]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff0d009ff0 a2=0 a3=7fc5f1309e90 items=0 ppid=2098 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.052000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:27:45.062000 audit[2177]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.062000 audit[2177]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcf2675360 a2=0 a3=7f8d61114e90 items=0 ppid=2098 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.062000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:27:45.065000 audit[2179]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.065000 audit[2179]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe4c7f7da0 a2=0 a3=7f035e5cee90 items=0 ppid=2098 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.065000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:27:45.069000 audit[2181]: NETFILTER_CFG 
table=filter:22 family=2 entries=1 op=nft_register_rule pid=2181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.069000 audit[2181]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffed4cff730 a2=0 a3=7f80ffa6ce90 items=0 ppid=2098 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.069000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:27:45.072000 audit[2183]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.072000 audit[2183]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd4378b380 a2=0 a3=7f5757b48e90 items=0 ppid=2098 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.072000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:27:45.073744 systemd-networkd[1531]: docker0: Link UP Jun 25 16:27:45.089000 audit[2187]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.089000 audit[2187]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff39c016e0 a2=0 a3=7f33ce979e90 items=0 ppid=2098 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.089000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:45.091000 audit[2188]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:45.091000 audit[2188]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff58a5c7a0 a2=0 a3=7fef19869e90 items=0 ppid=2098 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:45.091000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:45.092780 dockerd[2098]: time="2024-06-25T16:27:45.092747626Z" level=info msg="Loading containers: done." Jun 25 16:27:45.217680 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck421638146-merged.mount: Deactivated successfully. 
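Every `audit: PROCTITLE` record in this stretch of the log carries the offending command line as hex-encoded, NUL-separated argv, so the long `proctitle=2F7573722F7362696E…` strings are simply the `iptables` invocations dockerd (ppid 2098) issues while wiring up the `DOCKER`, `DOCKER-USER` and `DOCKER-ISOLATION-STAGE-*` chains before `docker0: Link UP`. Decoding them needs nothing beyond the standard library; a short sketch:

```python
def decode_proctitle(hex_argv: str) -> str:
    """Turn an audit PROCTITLE value (hex-encoded, NUL-separated argv) into a command line."""
    return " ".join(
        arg.decode() for arg in bytes.fromhex(hex_argv).split(b"\x00") if arg
    )

# First NETFILTER_CFG proctitle from the log above:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# /usr/sbin/iptables --wait -t nat -N DOCKER
```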
Jun 25 16:27:45.248110 dockerd[2098]: time="2024-06-25T16:27:45.247845648Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:27:45.248559 dockerd[2098]: time="2024-06-25T16:27:45.248526262Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:27:45.248700 dockerd[2098]: time="2024-06-25T16:27:45.248674683Z" level=info msg="Daemon has completed initialization" Jun 25 16:27:45.302580 dockerd[2098]: time="2024-06-25T16:27:45.301772726Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:27:45.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:45.301942 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:27:46.364820 containerd[1802]: time="2024-06-25T16:27:46.364772729Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 16:27:47.183795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:27:47.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:47.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:47.184049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:47.184128 systemd[1]: kubelet.service: Consumed 1.077s CPU time. Jun 25 16:27:47.191629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:47.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:47.822622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:47.888977 kubelet[2235]: E0625 16:27:47.888924 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:27:47.895451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:27:47.895739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:27:47.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:27:48.022947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77807216.mount: Deactivated successfully. 
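`API listen on /run/docker.sock` above means the Engine API is now reachable over that Unix socket. A minimal sketch of checking it with only the standard library, assuming the daemon is still listening on the default socket path; `GET /_ping` is the Engine API's liveness endpoint and normally answers `OK`:

```python
import socket

def docker_ping(path: str = "/run/docker.sock") -> str:
    """Send GET /_ping to the Docker Engine API over its Unix socket and return the response body."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):   # HTTP/1.0: server closes when done
            raw += chunk
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return body.decode(errors="replace")

# print(docker_ping())   # expected body: "OK"; needs root or membership in the docker group
```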
Jun 25 16:27:50.557387 containerd[1802]: time="2024-06-25T16:27:50.557328375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.559238 containerd[1802]: time="2024-06-25T16:27:50.559161988Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jun 25 16:27:50.562009 containerd[1802]: time="2024-06-25T16:27:50.561965076Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.565539 containerd[1802]: time="2024-06-25T16:27:50.565498690Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.570035 containerd[1802]: time="2024-06-25T16:27:50.569992083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.571276 containerd[1802]: time="2024-06-25T16:27:50.571234534Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 4.206413144s" Jun 25 16:27:50.571435 containerd[1802]: time="2024-06-25T16:27:50.571410982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 16:27:50.605777 containerd[1802]: time="2024-06-25T16:27:50.605370411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 16:27:53.458506 containerd[1802]: time="2024-06-25T16:27:53.458452417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.466099 containerd[1802]: time="2024-06-25T16:27:53.466025861Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jun 25 16:27:53.468834 containerd[1802]: time="2024-06-25T16:27:53.468791749Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.472532 containerd[1802]: time="2024-06-25T16:27:53.472485627Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.475580 containerd[1802]: time="2024-06-25T16:27:53.475537232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.476772 containerd[1802]: time="2024-06-25T16:27:53.476723929Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.870706208s" Jun 25 16:27:53.476880 containerd[1802]: time="2024-06-25T16:27:53.476777871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 16:27:53.502214 containerd[1802]: time="2024-06-25T16:27:53.502164473Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 16:27:55.118046 containerd[1802]: time="2024-06-25T16:27:55.117988981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.131722 containerd[1802]: time="2024-06-25T16:27:55.131648942Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jun 25 16:27:55.134602 containerd[1802]: time="2024-06-25T16:27:55.134556624Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.137724 containerd[1802]: time="2024-06-25T16:27:55.137684535Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.140699 containerd[1802]: time="2024-06-25T16:27:55.140658369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.141807 containerd[1802]: time="2024-06-25T16:27:55.141761196Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.639541437s" Jun 25 16:27:55.141897 containerd[1802]: time="2024-06-25T16:27:55.141812688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 16:27:55.166293 containerd[1802]: time="2024-06-25T16:27:55.166251487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 16:27:56.560449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2813700927.mount: Deactivated successfully. 
Jun 25 16:27:57.492820 containerd[1802]: time="2024-06-25T16:27:57.492762686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.530684 containerd[1802]: time="2024-06-25T16:27:57.530600350Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jun 25 16:27:57.565337 containerd[1802]: time="2024-06-25T16:27:57.565282351Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.601566 containerd[1802]: time="2024-06-25T16:27:57.601519177Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.614524 containerd[1802]: time="2024-06-25T16:27:57.614453706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.615631 containerd[1802]: time="2024-06-25T16:27:57.615585437Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.449117168s" Jun 25 16:27:57.615978 containerd[1802]: time="2024-06-25T16:27:57.615638207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 16:27:57.643924 containerd[1802]: time="2024-06-25T16:27:57.643893780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:27:58.146717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:27:58.150230 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:27:58.150382 kernel: audit: type=1130 audit(1719332878.145:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.150428 kernel: audit: type=1131 audit(1719332878.145:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.147050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:58.155639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:58.282572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357461250.mount: Deactivated successfully. 
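The kubelet keeps exiting with the same `open /var/lib/kubelet/config.yaml: no such file or directory` error and systemd keeps restarting it (`restart counter is at 2` above). That is the expected crash loop on a node that has not yet been joined to a cluster, since that config file is normally written by kubeadm during `kubeadm init`/`kubeadm join`. A small sketch for spotting the loop in a captured log, counting restart attempts and the repeated config-file failure (the log file name is illustrative):

```python
import re
from collections import Counter

RESTART_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
CONFIG_ERR = "failed to load Kubelet config file /var/lib/kubelet/config.yaml"

def kubelet_crashloop(lines):
    """Summarise kubelet restart attempts and repeated config-file failures."""
    stats = Counter()
    for line in lines:
        if m := RESTART_RE.search(line):
            stats["restarts"] = max(stats["restarts"], int(m.group(1)))
        if CONFIG_ERR in line:
            stats["config_errors"] += 1
    return dict(stats)

# with open("console.log") as fh:        # illustrative path
#     print(kubelet_crashloop(fh))       # e.g. {'restarts': 2, 'config_errors': 3}
```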
Jun 25 16:27:58.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.679746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:58.682233 kernel: audit: type=1130 audit(1719332878.678:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.746099 kubelet[2338]: E0625 16:27:58.746050 2338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:27:58.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:27:58.748437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:27:58.748571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:27:58.751528 kernel: audit: type=1131 audit(1719332878.747:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:27:59.883772 containerd[1802]: time="2024-06-25T16:27:59.883710829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:59.885435 containerd[1802]: time="2024-06-25T16:27:59.885378684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 16:27:59.887644 containerd[1802]: time="2024-06-25T16:27:59.887604248Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:59.890770 containerd[1802]: time="2024-06-25T16:27:59.890712902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:59.893642 containerd[1802]: time="2024-06-25T16:27:59.893602217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:59.895240 containerd[1802]: time="2024-06-25T16:27:59.895173639Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.251024487s" Jun 25 16:27:59.895360 containerd[1802]: time="2024-06-25T16:27:59.895247577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:27:59.924395 
containerd[1802]: time="2024-06-25T16:27:59.924352924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:28:00.478603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897518000.mount: Deactivated successfully. Jun 25 16:28:00.498980 containerd[1802]: time="2024-06-25T16:28:00.498922598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.500754 containerd[1802]: time="2024-06-25T16:28:00.500603800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:28:00.503014 containerd[1802]: time="2024-06-25T16:28:00.502969430Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.505823 containerd[1802]: time="2024-06-25T16:28:00.505786276Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.508649 containerd[1802]: time="2024-06-25T16:28:00.508600970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.509985 containerd[1802]: time="2024-06-25T16:28:00.509935515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 585.535419ms" Jun 25 16:28:00.510491 containerd[1802]: time="2024-06-25T16:28:00.509990302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:28:00.548282 containerd[1802]: time="2024-06-25T16:28:00.548214104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:28:01.157855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584155446.mount: Deactivated successfully. Jun 25 16:28:03.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:03.959176 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 16:28:03.962528 kernel: audit: type=1131 audit(1719332883.958:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:03.980181 kernel: audit: type=1334 audit(1719332883.977:247): prog-id=40 op=UNLOAD Jun 25 16:28:03.980330 kernel: audit: type=1334 audit(1719332883.977:248): prog-id=39 op=UNLOAD Jun 25 16:28:03.980374 kernel: audit: type=1334 audit(1719332883.977:249): prog-id=38 op=UNLOAD Jun 25 16:28:03.977000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:28:03.977000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:28:03.977000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:28:04.339301 containerd[1802]: time="2024-06-25T16:28:04.339068827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.341222 containerd[1802]: time="2024-06-25T16:28:04.340838194Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:28:04.343486 containerd[1802]: time="2024-06-25T16:28:04.343412108Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.347141 containerd[1802]: time="2024-06-25T16:28:04.347084218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.350497 containerd[1802]: time="2024-06-25T16:28:04.350458671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.352028 containerd[1802]: time="2024-06-25T16:28:04.351980320Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.802804698s" Jun 25 16:28:04.352120 containerd[1802]: time="2024-06-25T16:28:04.352040056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:28:08.765422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:28:08.784597 kernel: audit: type=1130 audit(1719332888.764:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:08.784655 kernel: audit: type=1131 audit(1719332888.764:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:08.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:08.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:08.765704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:08.786241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
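[editor's note] The kubelet.service restarts logged here keep failing for the reason recorded a little earlier: /var/lib/kubelet/config.yaml does not exist yet, so every start exits with status 1 until kubeadm writes that file during init/join. A tiny hedged sketch of the same pre-flight check an operator might script while diagnosing the flapping unit (only the path comes from the log; the rest is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Path reported in the kubelet "command failed" error above.
	const kubeletConfig = "/var/lib/kubelet/config.yaml"

	if _, err := os.Stat(kubeletConfig); errors.Is(err, fs.ErrNotExist) {
		// Matches the failure mode in the log: the unit keeps restarting
		// until kubeadm init/join drops the config file at this path.
		fmt.Printf("%s is missing; kubelet will keep exiting until kubeadm writes it\n", kubeletConfig)
		return
	} else if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Println("kubelet config present; the unit should start cleanly")
}
```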
Jun 25 16:28:08.905586 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:28:08.905931 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:28:08.906580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:08.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:28:08.910238 kernel: audit: type=1130 audit(1719332888.906:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:28:08.916002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:08.948452 systemd[1]: Reloading. Jun 25 16:28:09.248197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:28:09.351315 kernel: audit: type=1334 audit(1719332889.346:253): prog-id=41 op=LOAD Jun 25 16:28:09.351425 kernel: audit: type=1334 audit(1719332889.346:254): prog-id=42 op=LOAD Jun 25 16:28:09.351456 kernel: audit: type=1334 audit(1719332889.346:255): prog-id=24 op=UNLOAD Jun 25 16:28:09.351493 kernel: audit: type=1334 audit(1719332889.346:256): prog-id=25 op=UNLOAD Jun 25 16:28:09.346000 audit: BPF prog-id=41 op=LOAD Jun 25 16:28:09.346000 audit: BPF prog-id=42 op=LOAD Jun 25 16:28:09.346000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:28:09.346000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:28:09.361979 kernel: audit: type=1334 audit(1719332889.355:257): prog-id=43 op=LOAD Jun 25 16:28:09.362092 kernel: audit: type=1334 audit(1719332889.355:258): prog-id=26 op=UNLOAD Jun 25 16:28:09.362122 kernel: audit: type=1334 audit(1719332889.355:259): prog-id=44 op=LOAD Jun 25 16:28:09.362158 kernel: audit: type=1334 audit(1719332889.355:260): prog-id=45 op=LOAD Jun 25 16:28:09.362210 kernel: audit: type=1334 audit(1719332889.355:261): prog-id=27 op=UNLOAD Jun 25 16:28:09.362249 kernel: audit: type=1334 audit(1719332889.355:262): prog-id=28 op=UNLOAD Jun 25 16:28:09.355000 audit: BPF prog-id=43 op=LOAD Jun 25 16:28:09.355000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:28:09.355000 audit: BPF prog-id=44 op=LOAD Jun 25 16:28:09.355000 audit: BPF prog-id=45 op=LOAD Jun 25 16:28:09.355000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:28:09.355000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:28:09.358000 audit: BPF prog-id=46 op=LOAD Jun 25 16:28:09.358000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:28:09.358000 audit: BPF prog-id=47 op=LOAD Jun 25 16:28:09.358000 audit: BPF prog-id=48 op=LOAD Jun 25 16:28:09.358000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:28:09.358000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:28:09.359000 audit: BPF prog-id=49 op=LOAD Jun 25 16:28:09.359000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:28:09.361000 audit: BPF prog-id=50 op=LOAD Jun 25 16:28:09.361000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:28:09.361000 audit: BPF prog-id=51 op=LOAD Jun 25 16:28:09.361000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:28:09.363000 audit: BPF prog-id=52 op=LOAD Jun 25 16:28:09.363000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:28:09.364000 audit: BPF prog-id=53 op=LOAD Jun 25 16:28:09.364000 audit: BPF prog-id=54 op=LOAD Jun 25 16:28:09.364000 audit: BPF prog-id=31 
op=UNLOAD Jun 25 16:28:09.364000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:28:09.412382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:09.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:09.420385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:09.421285 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:28:09.421666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:09.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:09.426123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:10.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:10.476295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:10.552576 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:10.552576 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:28:10.552576 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:10.553341 kubelet[2580]: I0625 16:28:10.552683 2580 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:28:10.948824 kubelet[2580]: I0625 16:28:10.948784 2580 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 16:28:10.948824 kubelet[2580]: I0625 16:28:10.948822 2580 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:28:10.949111 kubelet[2580]: I0625 16:28:10.949089 2580 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 16:28:10.990738 kubelet[2580]: E0625 16:28:10.990676 2580 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:10.991127 kubelet[2580]: I0625 16:28:10.990676 2580 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:11.014658 kubelet[2580]: I0625 16:28:11.014605 2580 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:28:11.014971 kubelet[2580]: I0625 16:28:11.014948 2580 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:28:11.016873 kubelet[2580]: I0625 16:28:11.016835 2580 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:28:11.017711 kubelet[2580]: I0625 16:28:11.017681 2580 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:28:11.017801 kubelet[2580]: I0625 16:28:11.017714 2580 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:28:11.017851 kubelet[2580]: I0625 16:28:11.017843 2580 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:11.018060 kubelet[2580]: I0625 16:28:11.018040 2580 kubelet.go:396] "Attempting to sync node with API server" Jun 25 16:28:11.018145 kubelet[2580]: I0625 16:28:11.018066 2580 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:28:11.018145 kubelet[2580]: I0625 16:28:11.018103 2580 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:28:11.018145 kubelet[2580]: I0625 16:28:11.018123 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:28:11.020475 kubelet[2580]: W0625 16:28:11.020231 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.020475 kubelet[2580]: E0625 16:28:11.020380 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.020616 kubelet[2580]: W0625 16:28:11.020503 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 
16:28:11.020616 kubelet[2580]: E0625 16:28:11.020550 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.020701 kubelet[2580]: I0625 16:28:11.020637 2580 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:28:11.027711 kubelet[2580]: I0625 16:28:11.027669 2580 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:28:11.030089 kubelet[2580]: W0625 16:28:11.030032 2580 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:28:11.031264 kubelet[2580]: I0625 16:28:11.031245 2580 server.go:1256] "Started kubelet" Jun 25 16:28:11.031607 kubelet[2580]: I0625 16:28:11.031579 2580 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:28:11.033398 kubelet[2580]: I0625 16:28:11.032731 2580 server.go:461] "Adding debug handlers to kubelet server" Jun 25 16:28:11.039667 kubelet[2580]: I0625 16:28:11.039469 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:28:11.042252 kubelet[2580]: I0625 16:28:11.042231 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:28:11.042600 kubelet[2580]: I0625 16:28:11.042586 2580 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:28:11.043000 audit[2590]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.043000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd83825d80 a2=0 a3=7f6b7df14e90 items=0 ppid=2580 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:28:11.047450 kubelet[2580]: E0625 16:28:11.046334 2580 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.172:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.172:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-172.17dc4c27ac8ed71d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-172,UID:ip-172-31-18-172,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-172,},FirstTimestamp:2024-06-25 16:28:11.031164701 +0000 UTC m=+0.546172280,LastTimestamp:2024-06-25 16:28:11.031164701 +0000 UTC m=+0.546172280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-172,}" Jun 25 16:28:11.050129 kubelet[2580]: I0625 16:28:11.050111 2580 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:28:11.050797 kubelet[2580]: I0625 16:28:11.050779 2580 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Jun 25 16:28:11.050994 kubelet[2580]: I0625 16:28:11.050982 2580 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:28:11.050000 audit[2591]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.050000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe32f96590 a2=0 a3=7fa82e58ee90 items=0 ppid=2580 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:28:11.053864 kubelet[2580]: W0625 16:28:11.053814 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.054122 kubelet[2580]: E0625 16:28:11.054108 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.054378 kubelet[2580]: E0625 16:28:11.054364 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-172?timeout=10s\": dial tcp 172.31.18.172:6443: connect: connection refused" interval="200ms" Jun 25 16:28:11.057906 kubelet[2580]: I0625 16:28:11.057685 2580 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:28:11.057906 kubelet[2580]: I0625 16:28:11.057703 2580 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:28:11.057906 kubelet[2580]: I0625 16:28:11.057769 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:28:11.063098 kubelet[2580]: E0625 16:28:11.063075 2580 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:28:11.074000 audit[2594]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.074000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc35eb4630 a2=0 a3=7f3cd6a9be90 items=0 ppid=2580 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:11.083109 kubelet[2580]: I0625 16:28:11.083075 2580 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:28:11.083539 kubelet[2580]: I0625 16:28:11.083525 2580 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:28:11.083659 kubelet[2580]: I0625 16:28:11.083649 2580 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:11.087869 kubelet[2580]: I0625 16:28:11.087843 2580 policy_none.go:49] "None policy: Start" Jun 25 16:28:11.087000 audit[2598]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.087000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffebbb57d80 a2=0 a3=7f142f29de90 items=0 ppid=2580 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:11.088863 kubelet[2580]: I0625 16:28:11.088846 2580 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:28:11.088936 kubelet[2580]: I0625 16:28:11.088884 2580 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:28:11.101000 audit[2602]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.101000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc9a1c8b70 a2=0 a3=7f326c518e90 items=0 ppid=2580 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:28:11.102996 kubelet[2580]: I0625 16:28:11.102869 2580 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 16:28:11.104000 audit[2603]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:11.104000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffccf30e20 a2=0 a3=7f3de0d4de90 items=0 ppid=2580 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:28:11.105000 audit[2604]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.107319 kubelet[2580]: I0625 16:28:11.107221 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:28:11.107319 kubelet[2580]: I0625 16:28:11.107251 2580 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:28:11.107319 kubelet[2580]: I0625 16:28:11.107275 2580 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 16:28:11.107453 kubelet[2580]: E0625 16:28:11.107331 2580 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:28:11.105000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe91d3f720 a2=0 a3=7f41563e1e90 items=0 ppid=2580 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:28:11.107000 audit[2606]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:11.107000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff367dde40 a2=0 a3=7f1ee2957e90 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:28:11.110000 audit[2607]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.110000 audit[2608]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:11.110000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff20b8a900 a2=0 a3=7f6bbc859e90 items=0 ppid=2580 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 
16:28:11.110000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdeaf48ca0 a2=0 a3=7f4c2d10ae90 items=0 ppid=2580 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:28:11.112000 audit[2609]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:11.112000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc669c58b0 a2=0 a3=7f263f6d8e90 items=0 ppid=2580 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:28:11.113000 audit[2610]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:11.113000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd04fe9400 a2=0 a3=7f4e65673e90 items=0 ppid=2580 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:11.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:28:11.115078 kubelet[2580]: W0625 16:28:11.115038 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.115169 kubelet[2580]: E0625 16:28:11.115081 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:11.118432 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:28:11.133683 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:28:11.137111 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
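[editor's note] The NETFILTER_CFG audit records above carry the invoking command in their PROCTITLE field as hex-encoded argv with NUL separators. A short sketch for decoding one of them; the hex string is copied verbatim from the first iptables record, and the decoding itself is generic, nothing kubelet-specific:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from the audit[2590] record above.
	const proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel stores argv with NUL separators; swap them for spaces.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Output: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
}
```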
Jun 25 16:28:11.148991 kubelet[2580]: I0625 16:28:11.148955 2580 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:28:11.150650 kubelet[2580]: I0625 16:28:11.149814 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:28:11.152982 kubelet[2580]: I0625 16:28:11.152963 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:11.154053 kubelet[2580]: E0625 16:28:11.154035 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.172:6443/api/v1/nodes\": dial tcp 172.31.18.172:6443: connect: connection refused" node="ip-172-31-18-172" Jun 25 16:28:11.154289 kubelet[2580]: E0625 16:28:11.154274 2580 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-172\" not found" Jun 25 16:28:11.207944 kubelet[2580]: I0625 16:28:11.207819 2580 topology_manager.go:215] "Topology Admit Handler" podUID="6ddac5dedee0235c378b151d1454ecb7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-172" Jun 25 16:28:11.213643 kubelet[2580]: I0625 16:28:11.213612 2580 topology_manager.go:215] "Topology Admit Handler" podUID="e9d0135895eec771457df072d43819f2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.215400 kubelet[2580]: I0625 16:28:11.215380 2580 topology_manager.go:215] "Topology Admit Handler" podUID="57edb8ee14f3ba9ce83379e5809bb44e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-172" Jun 25 16:28:11.221731 systemd[1]: Created slice kubepods-burstable-pod6ddac5dedee0235c378b151d1454ecb7.slice - libcontainer container kubepods-burstable-pod6ddac5dedee0235c378b151d1454ecb7.slice. Jun 25 16:28:11.235305 systemd[1]: Created slice kubepods-burstable-pode9d0135895eec771457df072d43819f2.slice - libcontainer container kubepods-burstable-pode9d0135895eec771457df072d43819f2.slice. Jun 25 16:28:11.248487 systemd[1]: Created slice kubepods-burstable-pod57edb8ee14f3ba9ce83379e5809bb44e.slice - libcontainer container kubepods-burstable-pod57edb8ee14f3ba9ce83379e5809bb44e.slice. 
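[editor's note] The "Created slice kubepods-burstable-pod….slice" entries pair each admitted static pod with a systemd slice whose name embeds the QoS class and pod UID. The helper below is a rough reconstruction of that naming pattern, assuming the systemd cgroup driver reported in the container-manager config earlier; it is not the kubelet's own code. Dashes in regular pod UIDs are escaped to underscores for systemd, which doesn't arise for the hash-style static-pod UIDs seen in this log.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor mirrors the pattern visible in the systemd entries above:
// the cgroup path /kubepods/<qos>/pod<uid> rendered as a nested slice name.
func sliceNameFor(qosClass, podUID string) string {
	// Escape dashes for systemd unit naming (no-op for hash-style UIDs).
	uid := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// UID taken from the kube-apiserver static pod admitted above.
	fmt.Println(sliceNameFor("burstable", "6ddac5dedee0235c378b151d1454ecb7"))
	// Output: kubepods-burstable-pod6ddac5dedee0235c378b151d1454ecb7.slice
}
```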
Jun 25 16:28:11.252626 kubelet[2580]: I0625 16:28:11.252590 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:11.252769 kubelet[2580]: I0625 16:28:11.252650 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:11.252769 kubelet[2580]: I0625 16:28:11.252680 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.252769 kubelet[2580]: I0625 16:28:11.252708 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.252769 kubelet[2580]: I0625 16:28:11.252736 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.252769 kubelet[2580]: I0625 16:28:11.252765 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.252996 kubelet[2580]: I0625 16:28:11.252793 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57edb8ee14f3ba9ce83379e5809bb44e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-172\" (UID: \"57edb8ee14f3ba9ce83379e5809bb44e\") " pod="kube-system/kube-scheduler-ip-172-31-18-172" Jun 25 16:28:11.252996 kubelet[2580]: I0625 16:28:11.252826 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:11.252996 kubelet[2580]: I0625 16:28:11.252857 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-ca-certs\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:11.257607 kubelet[2580]: E0625 16:28:11.257567 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-172?timeout=10s\": dial tcp 172.31.18.172:6443: connect: connection refused" interval="400ms" Jun 25 16:28:11.356704 kubelet[2580]: I0625 16:28:11.356666 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:11.357061 kubelet[2580]: E0625 16:28:11.357025 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.172:6443/api/v1/nodes\": dial tcp 172.31.18.172:6443: connect: connection refused" node="ip-172-31-18-172" Jun 25 16:28:11.533826 containerd[1802]: time="2024-06-25T16:28:11.533774053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-172,Uid:6ddac5dedee0235c378b151d1454ecb7,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:11.539478 containerd[1802]: time="2024-06-25T16:28:11.539365344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-172,Uid:e9d0135895eec771457df072d43819f2,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:11.554800 containerd[1802]: time="2024-06-25T16:28:11.554758060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-172,Uid:57edb8ee14f3ba9ce83379e5809bb44e,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:11.658349 kubelet[2580]: E0625 16:28:11.658311 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-172?timeout=10s\": dial tcp 172.31.18.172:6443: connect: connection refused" interval="800ms" Jun 25 16:28:11.759198 kubelet[2580]: I0625 16:28:11.759153 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:11.759523 kubelet[2580]: E0625 16:28:11.759500 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.172:6443/api/v1/nodes\": dial tcp 172.31.18.172:6443: connect: connection refused" node="ip-172-31-18-172" Jun 25 16:28:12.011288 kubelet[2580]: W0625 16:28:12.011127 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.011288 kubelet[2580]: E0625 16:28:12.011203 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.066731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325832258.mount: Deactivated successfully. 
Jun 25 16:28:12.084694 containerd[1802]: time="2024-06-25T16:28:12.084638710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.086679 containerd[1802]: time="2024-06-25T16:28:12.086607080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:28:12.088930 containerd[1802]: time="2024-06-25T16:28:12.088887767Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.095742 containerd[1802]: time="2024-06-25T16:28:12.095684722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:28:12.097627 containerd[1802]: time="2024-06-25T16:28:12.097586965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.099001 kubelet[2580]: W0625 16:28:12.098937 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.099092 kubelet[2580]: E0625 16:28:12.099009 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.099799 containerd[1802]: time="2024-06-25T16:28:12.099758384Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.101398 containerd[1802]: time="2024-06-25T16:28:12.101363141Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.104536 containerd[1802]: time="2024-06-25T16:28:12.104498790Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.105959 containerd[1802]: time="2024-06-25T16:28:12.105913214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:28:12.108131 containerd[1802]: time="2024-06-25T16:28:12.108086874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.110916 containerd[1802]: time="2024-06-25T16:28:12.110864938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.972657ms" Jun 25 16:28:12.112914 containerd[1802]: time="2024-06-25T16:28:12.112871559Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.114380 containerd[1802]: time="2024-06-25T16:28:12.114274920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.119051 containerd[1802]: time="2024-06-25T16:28:12.119004948Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.124466 containerd[1802]: time="2024-06-25T16:28:12.124426579Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.126103 containerd[1802]: time="2024-06-25T16:28:12.126058602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.194056ms" Jun 25 16:28:12.127227 containerd[1802]: time="2024-06-25T16:28:12.127178446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:12.128039 containerd[1802]: time="2024-06-25T16:28:12.128004433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.460805ms" Jun 25 16:28:12.420277 containerd[1802]: time="2024-06-25T16:28:12.419849861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:12.420277 containerd[1802]: time="2024-06-25T16:28:12.419921272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.420277 containerd[1802]: time="2024-06-25T16:28:12.419949923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:12.420277 containerd[1802]: time="2024-06-25T16:28:12.419971910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.429381 containerd[1802]: time="2024-06-25T16:28:12.429061726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:12.429381 containerd[1802]: time="2024-06-25T16:28:12.429136212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.429381 containerd[1802]: time="2024-06-25T16:28:12.429160275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:12.429381 containerd[1802]: time="2024-06-25T16:28:12.429177471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.435689 containerd[1802]: time="2024-06-25T16:28:12.435383308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:12.435689 containerd[1802]: time="2024-06-25T16:28:12.435444794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.435689 containerd[1802]: time="2024-06-25T16:28:12.435473512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:12.435689 containerd[1802]: time="2024-06-25T16:28:12.435506023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:12.460005 kubelet[2580]: E0625 16:28:12.459960 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-172?timeout=10s\": dial tcp 172.31.18.172:6443: connect: connection refused" interval="1.6s" Jun 25 16:28:12.469467 systemd[1]: Started cri-containerd-184b38d340ed9490b2795f6c3e85adb17ed6d0db3258074a70493d1d9ae7bc3d.scope - libcontainer container 184b38d340ed9490b2795f6c3e85adb17ed6d0db3258074a70493d1d9ae7bc3d. Jun 25 16:28:12.480517 systemd[1]: Started cri-containerd-afd1007fa39577ff6a19b445f93da636e40fa660ad4f404e5ce250cf65ec2a2c.scope - libcontainer container afd1007fa39577ff6a19b445f93da636e40fa660ad4f404e5ce250cf65ec2a2c. Jun 25 16:28:12.481867 kubelet[2580]: W0625 16:28:12.481542 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.481867 kubelet[2580]: E0625 16:28:12.481607 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.500412 systemd[1]: Started cri-containerd-41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9.scope - libcontainer container 41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9. 
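[editor's note] Each "Started cri-containerd-<id>.scope" line is the transient scope systemd creates for a container's cgroup when runc starts it under the systemd cgroup driver; the hex string in the unit name is the container (here, sandbox) ID that RunPodSandbox later reports. A tiny sketch of recovering the ID from the unit name, with the prefix/suffix pattern taken from the log and the helper itself purely illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// containerIDFromScope strips the naming pattern used by the transient
// scopes above, e.g. "cri-containerd-<id>.scope" -> "<id>".
func containerIDFromScope(unit string) (string, bool) {
	id, ok := strings.CutPrefix(unit, "cri-containerd-")
	if !ok {
		return "", false
	}
	return strings.CutSuffix(id, ".scope")
}

func main() {
	// Unit name copied from the systemd entry above.
	unit := "cri-containerd-41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9.scope"
	if id, ok := containerIDFromScope(unit); ok {
		fmt.Println(id)
	}
}
```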
Jun 25 16:28:12.505000 audit: BPF prog-id=55 op=LOAD Jun 25 16:28:12.505000 audit: BPF prog-id=56 op=LOAD Jun 25 16:28:12.505000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2644 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346233386433343065643934393062323739356636633365383561 Jun 25 16:28:12.506000 audit: BPF prog-id=57 op=LOAD Jun 25 16:28:12.506000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2644 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346233386433343065643934393062323739356636633365383561 Jun 25 16:28:12.506000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:28:12.506000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:28:12.506000 audit: BPF prog-id=58 op=LOAD Jun 25 16:28:12.506000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2644 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346233386433343065643934393062323739356636633365383561 Jun 25 16:28:12.515000 audit: BPF prog-id=59 op=LOAD Jun 25 16:28:12.515000 audit: BPF prog-id=60 op=LOAD Jun 25 16:28:12.515000 audit[2658]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2632 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166643130303766613339353737666636613139623434356639336461 Jun 25 16:28:12.515000 audit: BPF prog-id=61 op=LOAD Jun 25 16:28:12.515000 audit[2658]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2632 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.515000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166643130303766613339353737666636613139623434356639336461 Jun 25 16:28:12.515000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:28:12.516000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:28:12.516000 audit: BPF prog-id=62 op=LOAD Jun 25 16:28:12.516000 audit[2658]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2632 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166643130303766613339353737666636613139623434356639336461 Jun 25 16:28:12.532741 kubelet[2580]: W0625 16:28:12.532591 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.532741 kubelet[2580]: E0625 16:28:12.532699 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:12.551000 audit: BPF prog-id=63 op=LOAD Jun 25 16:28:12.552000 audit: BPF prog-id=64 op=LOAD Jun 25 16:28:12.552000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2655 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431613564356561386133306533363166353134313365303436353834 Jun 25 16:28:12.552000 audit: BPF prog-id=65 op=LOAD Jun 25 16:28:12.552000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2655 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431613564356561386133306533363166353134313365303436353834 Jun 25 16:28:12.552000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:28:12.552000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:28:12.552000 audit: BPF prog-id=66 op=LOAD Jun 25 16:28:12.552000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2655 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431613564356561386133306533363166353134313365303436353834 Jun 25 16:28:12.562510 kubelet[2580]: I0625 16:28:12.561983 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:12.562510 kubelet[2580]: E0625 16:28:12.562454 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.172:6443/api/v1/nodes\": dial tcp 172.31.18.172:6443: connect: connection refused" node="ip-172-31-18-172" Jun 25 16:28:12.651957 containerd[1802]: time="2024-06-25T16:28:12.651887665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-172,Uid:6ddac5dedee0235c378b151d1454ecb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"184b38d340ed9490b2795f6c3e85adb17ed6d0db3258074a70493d1d9ae7bc3d\"" Jun 25 16:28:12.655680 containerd[1802]: time="2024-06-25T16:28:12.654381823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-172,Uid:57edb8ee14f3ba9ce83379e5809bb44e,Namespace:kube-system,Attempt:0,} returns sandbox id \"41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9\"" Jun 25 16:28:12.659131 containerd[1802]: time="2024-06-25T16:28:12.658087622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-172,Uid:e9d0135895eec771457df072d43819f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd1007fa39577ff6a19b445f93da636e40fa660ad4f404e5ce250cf65ec2a2c\"" Jun 25 16:28:12.666587 containerd[1802]: time="2024-06-25T16:28:12.666525572Z" level=info msg="CreateContainer within sandbox \"41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:28:12.667235 containerd[1802]: time="2024-06-25T16:28:12.667177261Z" level=info msg="CreateContainer within sandbox \"afd1007fa39577ff6a19b445f93da636e40fa660ad4f404e5ce250cf65ec2a2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:28:12.668687 containerd[1802]: time="2024-06-25T16:28:12.668654664Z" level=info msg="CreateContainer within sandbox \"184b38d340ed9490b2795f6c3e85adb17ed6d0db3258074a70493d1d9ae7bc3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:28:12.730590 containerd[1802]: time="2024-06-25T16:28:12.730454601Z" level=info msg="CreateContainer within sandbox \"41a5d5ea8a30e361f51413e04658469834fe9dae11bf948c5eb319cbdd26beb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8651460d77c0b919b12973b43cb74d25027547f6c39ef0dcf6d3fd43a612cb34\"" Jun 25 16:28:12.735005 containerd[1802]: time="2024-06-25T16:28:12.734956341Z" level=info msg="StartContainer for \"8651460d77c0b919b12973b43cb74d25027547f6c39ef0dcf6d3fd43a612cb34\"" Jun 25 16:28:12.747908 containerd[1802]: time="2024-06-25T16:28:12.747646098Z" level=info msg="CreateContainer within sandbox \"afd1007fa39577ff6a19b445f93da636e40fa660ad4f404e5ce250cf65ec2a2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c14f11417c30d4775eb6aae3720d2ddd988f3513e2666e34dfb7b993d2a65d2\"" Jun 25 16:28:12.755542 containerd[1802]: time="2024-06-25T16:28:12.755498893Z" level=info msg="StartContainer for 
\"8c14f11417c30d4775eb6aae3720d2ddd988f3513e2666e34dfb7b993d2a65d2\"" Jun 25 16:28:12.757678 containerd[1802]: time="2024-06-25T16:28:12.757640910Z" level=info msg="CreateContainer within sandbox \"184b38d340ed9490b2795f6c3e85adb17ed6d0db3258074a70493d1d9ae7bc3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3462a2f9be35d1ab6b9c7067644065e560664c6a41dc0e9239d66b169d288d7b\"" Jun 25 16:28:12.759497 containerd[1802]: time="2024-06-25T16:28:12.759456605Z" level=info msg="StartContainer for \"3462a2f9be35d1ab6b9c7067644065e560664c6a41dc0e9239d66b169d288d7b\"" Jun 25 16:28:12.818365 systemd[1]: Started cri-containerd-8651460d77c0b919b12973b43cb74d25027547f6c39ef0dcf6d3fd43a612cb34.scope - libcontainer container 8651460d77c0b919b12973b43cb74d25027547f6c39ef0dcf6d3fd43a612cb34. Jun 25 16:28:12.831493 systemd[1]: Started cri-containerd-8c14f11417c30d4775eb6aae3720d2ddd988f3513e2666e34dfb7b993d2a65d2.scope - libcontainer container 8c14f11417c30d4775eb6aae3720d2ddd988f3513e2666e34dfb7b993d2a65d2. Jun 25 16:28:12.841000 audit: BPF prog-id=67 op=LOAD Jun 25 16:28:12.842000 audit: BPF prog-id=68 op=LOAD Jun 25 16:28:12.842000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2655 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353134363064373763306239313962313239373362343363623734 Jun 25 16:28:12.842000 audit: BPF prog-id=69 op=LOAD Jun 25 16:28:12.842000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2655 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353134363064373763306239313962313239373362343363623734 Jun 25 16:28:12.843000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:28:12.843000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:28:12.843000 audit: BPF prog-id=70 op=LOAD Jun 25 16:28:12.843000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2655 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353134363064373763306239313962313239373362343363623734 Jun 25 16:28:12.863404 systemd[1]: Started cri-containerd-3462a2f9be35d1ab6b9c7067644065e560664c6a41dc0e9239d66b169d288d7b.scope - libcontainer container 3462a2f9be35d1ab6b9c7067644065e560664c6a41dc0e9239d66b169d288d7b. 
Jun 25 16:28:12.868000 audit: BPF prog-id=71 op=LOAD Jun 25 16:28:12.868000 audit: BPF prog-id=72 op=LOAD Jun 25 16:28:12.868000 audit[2772]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2632 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863313466313134313763333064343737356562366161653337323064 Jun 25 16:28:12.868000 audit: BPF prog-id=73 op=LOAD Jun 25 16:28:12.868000 audit[2772]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2632 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863313466313134313763333064343737356562366161653337323064 Jun 25 16:28:12.868000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:28:12.869000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:28:12.869000 audit: BPF prog-id=74 op=LOAD Jun 25 16:28:12.869000 audit[2772]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2632 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863313466313134313763333064343737356562366161653337323064 Jun 25 16:28:12.890000 audit: BPF prog-id=75 op=LOAD Jun 25 16:28:12.890000 audit: BPF prog-id=76 op=LOAD Jun 25 16:28:12.890000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2644 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363261326639626533356431616236623963373036373634343036 Jun 25 16:28:12.891000 audit: BPF prog-id=77 op=LOAD Jun 25 16:28:12.891000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2644 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.891000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363261326639626533356431616236623963373036373634343036 Jun 25 16:28:12.891000 audit: BPF prog-id=77 op=UNLOAD Jun 25 16:28:12.891000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:28:12.891000 audit: BPF prog-id=78 op=LOAD Jun 25 16:28:12.891000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2644 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:12.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363261326639626533356431616236623963373036373634343036 Jun 25 16:28:12.977548 containerd[1802]: time="2024-06-25T16:28:12.977492669Z" level=info msg="StartContainer for \"3462a2f9be35d1ab6b9c7067644065e560664c6a41dc0e9239d66b169d288d7b\" returns successfully" Jun 25 16:28:12.992667 containerd[1802]: time="2024-06-25T16:28:12.992550645Z" level=info msg="StartContainer for \"8c14f11417c30d4775eb6aae3720d2ddd988f3513e2666e34dfb7b993d2a65d2\" returns successfully" Jun 25 16:28:12.992912 containerd[1802]: time="2024-06-25T16:28:12.992550642Z" level=info msg="StartContainer for \"8651460d77c0b919b12973b43cb74d25027547f6c39ef0dcf6d3fd43a612cb34\" returns successfully" Jun 25 16:28:13.150740 kubelet[2580]: E0625 16:28:13.150710 2580 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:13.738633 kubelet[2580]: W0625 16:28:13.738579 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:13.738633 kubelet[2580]: E0625 16:28:13.738641 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-172&limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:14.061632 kubelet[2580]: E0625 16:28:14.061593 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-172?timeout=10s\": dial tcp 172.31.18.172:6443: connect: connection refused" interval="3.2s" Jun 25 16:28:14.164728 kubelet[2580]: I0625 16:28:14.164696 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:14.165142 kubelet[2580]: E0625 16:28:14.165050 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.172:6443/api/v1/nodes\": dial tcp 172.31.18.172:6443: connect: connection refused" node="ip-172-31-18-172" Jun 25 16:28:14.265266 
kubelet[2580]: W0625 16:28:14.265222 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:14.265266 kubelet[2580]: E0625 16:28:14.265274 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:14.616032 kernel: kauditd_printk_skb: 129 callbacks suppressed Jun 25 16:28:14.616182 kernel: audit: type=1400 audit(1719332894.611:332): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:14.611000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:14.611000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0006fa000 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:14.624210 kernel: audit: type=1300 audit(1719332894.611:332): arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0006fa000 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:14.611000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:14.628208 kernel: audit: type=1327 audit(1719332894.611:332): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:14.629092 kubelet[2580]: W0625 16:28:14.629045 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:14.629214 kubelet[2580]: E0625 16:28:14.629100 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.172:6443: connect: connection refused Jun 25 16:28:14.626000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" 
dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:14.634330 kernel: audit: type=1400 audit(1719332894.626:333): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:14.626000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000362260 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:14.639209 kernel: audit: type=1300 audit(1719332894.626:333): arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000362260 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:14.626000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:14.645222 kernel: audit: type=1327 audit(1719332894.626:333): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:16.389000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.389000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.395226 kernel: audit: type=1400 audit(1719332896.389:334): avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.395398 kernel: audit: type=1400 audit(1719332896.389:335): avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.398564 kernel: audit: type=1300 audit(1719332896.389:335): arch=c000003e syscall=254 success=no exit=-13 a0=41 a1=c007a5d9b0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.389000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=41 a1=c007a5d9b0 a2=fc6 
a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.402089 kernel: audit: type=1327 audit(1719332896.389:335): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.389000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.389000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c003897d40 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.389000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.390000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.390000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c003ee4120 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.390000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.390000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.390000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c003897e90 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.390000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.409000 audit[2816]: AVC avc: denied { watch } for pid=2816 
comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.409000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c003ee4640 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.409000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.410000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:16.410000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c003322ed0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:28:16.410000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:16.915050 kubelet[2580]: E0625 16:28:16.915014 2580 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-172" not found Jun 25 16:28:17.267412 kubelet[2580]: E0625 16:28:17.267376 2580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-172\" not found" node="ip-172-31-18-172" Jun 25 16:28:17.295211 kubelet[2580]: E0625 16:28:17.295156 2580 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-172" not found Jun 25 16:28:17.367970 kubelet[2580]: I0625 16:28:17.367924 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:17.408207 kubelet[2580]: I0625 16:28:17.408110 2580 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-172" Jun 25 16:28:17.427988 kubelet[2580]: E0625 16:28:17.427942 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:17.529018 kubelet[2580]: E0625 16:28:17.528863 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:17.630530 kubelet[2580]: E0625 16:28:17.630490 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:17.731486 kubelet[2580]: E0625 16:28:17.731451 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:17.832248 kubelet[2580]: E0625 
16:28:17.832098 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:17.932753 kubelet[2580]: E0625 16:28:17.932715 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.033374 kubelet[2580]: E0625 16:28:18.033338 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.134617 kubelet[2580]: E0625 16:28:18.134512 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.235034 kubelet[2580]: E0625 16:28:18.234999 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.336114 kubelet[2580]: E0625 16:28:18.336071 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.436488 kubelet[2580]: E0625 16:28:18.436313 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.537578 kubelet[2580]: E0625 16:28:18.537533 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-172\" not found" Jun 25 16:28:18.657391 update_engine[1791]: I0625 16:28:18.657331 1791 update_attempter.cc:509] Updating boot flags... Jun 25 16:28:18.772279 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2874) Jun 25 16:28:19.025315 kubelet[2580]: I0625 16:28:19.024276 2580 apiserver.go:52] "Watching apiserver" Jun 25 16:28:19.053633 kubelet[2580]: I0625 16:28:19.053601 2580 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:28:19.140229 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2873) Jun 25 16:28:19.343323 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2873) Jun 25 16:28:19.693970 systemd[1]: Reloading. 
Jun 25 16:28:20.046000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.047372 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:28:20.047463 kernel: audit: type=1400 audit(1719332900.046:340): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.046000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000827a00 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.053254 kernel: audit: type=1300 audit(1719332900.046:340): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000827a00 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.046000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.060366 kernel: audit: type=1327 audit(1719332900.046:340): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.060454 kernel: audit: type=1400 audit(1719332900.052:341): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.052000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.064641 kernel: audit: type=1300 audit(1719332900.052:341): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000bda440 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.052000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000bda440 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.068540 kernel: audit: type=1327 audit(1719332900.052:341): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.052000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.071234 kernel: audit: type=1400 audit(1719332900.053:342): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.053000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.053000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c48d20 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.083399 kernel: audit: type=1300 audit(1719332900.053:342): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c48d20 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.083486 kernel: audit: type=1327 audit(1719332900.053:342): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.053000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.086501 kernel: audit: type=1400 audit(1719332900.053:343): avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.053000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:20.086468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:28:20.053000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000bda480 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:20.053000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:20.202000 audit: BPF prog-id=79 op=LOAD Jun 25 16:28:20.202000 audit: BPF prog-id=80 op=LOAD Jun 25 16:28:20.202000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:28:20.202000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:28:20.204000 audit: BPF prog-id=81 op=LOAD Jun 25 16:28:20.204000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:28:20.205000 audit: BPF prog-id=82 op=LOAD Jun 25 16:28:20.205000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:28:20.207000 audit: BPF prog-id=83 op=LOAD Jun 25 16:28:20.207000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:28:20.208000 audit: BPF prog-id=84 op=LOAD Jun 25 16:28:20.208000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:28:20.208000 audit: BPF prog-id=85 op=LOAD Jun 25 16:28:20.208000 audit: BPF prog-id=86 op=LOAD Jun 25 16:28:20.208000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:28:20.208000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:28:20.209000 audit: BPF prog-id=87 op=LOAD Jun 25 16:28:20.209000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:28:20.209000 audit: BPF prog-id=88 op=LOAD Jun 25 16:28:20.209000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:28:20.211000 audit: BPF prog-id=89 op=LOAD Jun 25 16:28:20.211000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:28:20.212000 audit: BPF prog-id=90 op=LOAD Jun 25 16:28:20.212000 audit: BPF prog-id=91 op=LOAD Jun 25 16:28:20.212000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:28:20.212000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:28:20.213000 audit: BPF prog-id=92 op=LOAD Jun 25 16:28:20.213000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:28:20.215000 audit: BPF prog-id=93 op=LOAD Jun 25 16:28:20.215000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:28:20.216000 audit: BPF prog-id=94 op=LOAD Jun 25 16:28:20.216000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:28:20.218000 audit: BPF prog-id=95 op=LOAD Jun 25 16:28:20.218000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:28:20.220000 audit: BPF prog-id=96 op=LOAD Jun 25 16:28:20.220000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:28:20.220000 audit: BPF prog-id=97 op=LOAD Jun 25 16:28:20.220000 audit: BPF prog-id=98 op=LOAD Jun 25 16:28:20.220000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:28:20.221000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:28:20.240296 kubelet[2580]: I0625 16:28:20.240265 2580 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:20.240407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:20.262619 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:28:20.262869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:20.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:20.267819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:20.697411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:20.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:20.808499 kubelet[3202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:20.808499 kubelet[3202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:28:20.808499 kubelet[3202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:20.809025 kubelet[3202]: I0625 16:28:20.808628 3202 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:28:20.822732 kubelet[3202]: I0625 16:28:20.822692 3202 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 16:28:20.822732 kubelet[3202]: I0625 16:28:20.822730 3202 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:28:20.823049 kubelet[3202]: I0625 16:28:20.823032 3202 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 16:28:20.825308 kubelet[3202]: I0625 16:28:20.825273 3202 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:28:20.838226 kubelet[3202]: I0625 16:28:20.837425 3202 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:20.852025 kubelet[3202]: I0625 16:28:20.851993 3202 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:28:20.852358 kubelet[3202]: I0625 16:28:20.852336 3202 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:28:20.852679 kubelet[3202]: I0625 16:28:20.852642 3202 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:28:20.852814 kubelet[3202]: I0625 16:28:20.852706 3202 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:28:20.852814 kubelet[3202]: I0625 16:28:20.852744 3202 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:28:20.852814 kubelet[3202]: I0625 16:28:20.852780 3202 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:20.852980 kubelet[3202]: I0625 16:28:20.852929 3202 kubelet.go:396] "Attempting to sync node with API server" Jun 25 16:28:20.852980 kubelet[3202]: I0625 16:28:20.852953 3202 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:28:20.859393 kubelet[3202]: I0625 16:28:20.859365 3202 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:28:20.859561 kubelet[3202]: I0625 16:28:20.859552 3202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:28:20.862235 kubelet[3202]: I0625 16:28:20.862198 3202 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:28:20.865862 kubelet[3202]: I0625 16:28:20.865825 3202 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:28:20.866602 kubelet[3202]: I0625 16:28:20.866579 3202 server.go:1256] "Started kubelet" Jun 25 16:28:20.870662 kubelet[3202]: I0625 16:28:20.870336 3202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:28:20.883682 kubelet[3202]: I0625 16:28:20.883553 3202 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:28:20.896883 kubelet[3202]: E0625 16:28:20.896851 3202 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:28:20.897232 kubelet[3202]: I0625 16:28:20.897179 3202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:28:20.898875 kubelet[3202]: I0625 16:28:20.898850 3202 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:28:20.904330 kubelet[3202]: I0625 16:28:20.904288 3202 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:28:20.905720 kubelet[3202]: I0625 16:28:20.905701 3202 server.go:461] "Adding debug handlers to kubelet server" Jun 25 16:28:20.918133 kubelet[3202]: I0625 16:28:20.918102 3202 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:28:20.918663 kubelet[3202]: I0625 16:28:20.918648 3202 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:28:20.929000 kubelet[3202]: I0625 16:28:20.928975 3202 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:28:20.929326 kubelet[3202]: I0625 16:28:20.929304 3202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:28:20.934868 kubelet[3202]: I0625 16:28:20.934845 3202 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:28:20.939055 kubelet[3202]: I0625 16:28:20.939022 3202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:28:20.945425 kubelet[3202]: I0625 16:28:20.945397 3202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:28:20.945623 kubelet[3202]: I0625 16:28:20.945610 3202 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:28:20.945715 kubelet[3202]: I0625 16:28:20.945706 3202 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 16:28:20.945833 kubelet[3202]: E0625 16:28:20.945823 3202 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:28:21.010577 kubelet[3202]: I0625 16:28:21.010479 3202 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-172" Jun 25 16:28:21.028418 kubelet[3202]: I0625 16:28:21.028378 3202 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-172" Jun 25 16:28:21.028574 kubelet[3202]: I0625 16:28:21.028467 3202 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-172" Jun 25 16:28:21.043806 kubelet[3202]: I0625 16:28:21.043782 3202 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:28:21.044001 kubelet[3202]: I0625 16:28:21.043988 3202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:28:21.044100 kubelet[3202]: I0625 16:28:21.044091 3202 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:21.044366 kubelet[3202]: I0625 16:28:21.044354 3202 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:28:21.044472 kubelet[3202]: I0625 16:28:21.044463 3202 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:28:21.044539 kubelet[3202]: I0625 16:28:21.044531 3202 policy_none.go:49] "None policy: Start" Jun 25 16:28:21.045344 kubelet[3202]: I0625 16:28:21.045328 3202 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:28:21.045453 
kubelet[3202]: I0625 16:28:21.045443 3202 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:28:21.045692 kubelet[3202]: I0625 16:28:21.045679 3202 state_mem.go:75] "Updated machine memory state" Jun 25 16:28:21.047880 kubelet[3202]: E0625 16:28:21.047863 3202 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:28:21.055180 kubelet[3202]: I0625 16:28:21.055122 3202 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:28:21.055503 kubelet[3202]: I0625 16:28:21.055481 3202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:28:21.249433 kubelet[3202]: I0625 16:28:21.249386 3202 topology_manager.go:215] "Topology Admit Handler" podUID="6ddac5dedee0235c378b151d1454ecb7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-172" Jun 25 16:28:21.249682 kubelet[3202]: I0625 16:28:21.249504 3202 topology_manager.go:215] "Topology Admit Handler" podUID="e9d0135895eec771457df072d43819f2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.249682 kubelet[3202]: I0625 16:28:21.249550 3202 topology_manager.go:215] "Topology Admit Handler" podUID="57edb8ee14f3ba9ce83379e5809bb44e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-172" Jun 25 16:28:21.273880 kubelet[3202]: E0625 16:28:21.269586 3202 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-172\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-172" Jun 25 16:28:21.279933 kubelet[3202]: E0625 16:28:21.279883 3202 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-172\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.321850 kubelet[3202]: I0625 16:28:21.321803 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.321850 kubelet[3202]: I0625 16:28:21.321857 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.322079 kubelet[3202]: I0625 16:28:21.321885 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57edb8ee14f3ba9ce83379e5809bb44e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-172\" (UID: \"57edb8ee14f3ba9ce83379e5809bb44e\") " pod="kube-system/kube-scheduler-ip-172-31-18-172" Jun 25 16:28:21.322079 kubelet[3202]: I0625 16:28:21.321927 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:21.322079 kubelet[3202]: I0625 16:28:21.321959 
3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:21.322079 kubelet[3202]: I0625 16:28:21.321984 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.322079 kubelet[3202]: I0625 16:28:21.322019 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.322559 kubelet[3202]: I0625 16:28:21.322048 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ddac5dedee0235c378b151d1454ecb7-ca-certs\") pod \"kube-apiserver-ip-172-31-18-172\" (UID: \"6ddac5dedee0235c378b151d1454ecb7\") " pod="kube-system/kube-apiserver-ip-172-31-18-172" Jun 25 16:28:21.322559 kubelet[3202]: I0625 16:28:21.322081 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9d0135895eec771457df072d43819f2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-172\" (UID: \"e9d0135895eec771457df072d43819f2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-172" Jun 25 16:28:21.876331 kubelet[3202]: I0625 16:28:21.876291 3202 apiserver.go:52] "Watching apiserver" Jun 25 16:28:21.924286 kubelet[3202]: I0625 16:28:21.924235 3202 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:28:22.009578 kubelet[3202]: I0625 16:28:22.009542 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-172" podStartSLOduration=3.009463359 podStartE2EDuration="3.009463359s" podCreationTimestamp="2024-06-25 16:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:21.976111107 +0000 UTC m=+1.268343396" watchObservedRunningTime="2024-06-25 16:28:22.009463359 +0000 UTC m=+1.301695635" Jun 25 16:28:22.010011 kubelet[3202]: I0625 16:28:22.009991 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-172" podStartSLOduration=3.009934211 podStartE2EDuration="3.009934211s" podCreationTimestamp="2024-06-25 16:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:22.006974563 +0000 UTC m=+1.299206839" watchObservedRunningTime="2024-06-25 16:28:22.009934211 +0000 UTC m=+1.302166486" Jun 25 16:28:22.024760 kubelet[3202]: I0625 16:28:22.024717 3202 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-ip-172-31-18-172" podStartSLOduration=1.024657406 podStartE2EDuration="1.024657406s" podCreationTimestamp="2024-06-25 16:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:22.0238924 +0000 UTC m=+1.316124675" watchObservedRunningTime="2024-06-25 16:28:22.024657406 +0000 UTC m=+1.316889676" Jun 25 16:28:25.365424 sudo[2089]: pam_unix(sudo:session): session closed for user root Jun 25 16:28:25.365000 audit[2089]: USER_END pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.366517 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:28:25.366593 kernel: audit: type=1106 audit(1719332905.365:386): pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.366000 audit[2089]: CRED_DISP pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.371950 kernel: audit: type=1104 audit(1719332905.366:387): pid=2089 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.401961 sshd[2086]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:25.408000 audit[2086]: USER_END pid=2086 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.408000 audit[2086]: CRED_DISP pid=2086 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.411376 systemd[1]: sshd@6-172.31.18.172:22-139.178.89.65:55296.service: Deactivated successfully. Jun 25 16:28:25.412332 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:28:25.412494 systemd[1]: session-7.scope: Consumed 5.991s CPU time. Jun 25 16:28:25.414207 systemd-logind[1790]: Session 7 logged out. Waiting for processes to exit. 
Jun 25 16:28:25.419881 kernel: audit: type=1106 audit(1719332905.408:388): pid=2086 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.419925 kernel: audit: type=1104 audit(1719332905.408:389): pid=2086 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.419958 kernel: audit: type=1131 audit(1719332905.411:390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.172:22-139.178.89.65:55296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.172:22-139.178.89.65:55296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.430576 systemd-logind[1790]: Removed session 7. Jun 25 16:28:31.133000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=7830 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:28:31.142325 kernel: audit: type=1400 audit(1719332911.133:391): avc: denied { watch } for pid=2803 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=7830 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:28:31.142509 kernel: audit: type=1300 audit(1719332911.133:391): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001075540 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:31.142553 kernel: audit: type=1327 audit(1719332911.133:391): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:31.133000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001075540 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:28:31.133000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:32.986877 kubelet[3202]: I0625 16:28:32.986846 3202 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:28:32.987388 containerd[1802]: 
time="2024-06-25T16:28:32.987267039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:28:32.987887 kubelet[3202]: I0625 16:28:32.987871 3202 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:28:33.814834 kubelet[3202]: I0625 16:28:33.814741 3202 topology_manager.go:215] "Topology Admit Handler" podUID="f8ae1b08-51d8-42be-a4bb-cec74cda9b57" podNamespace="kube-system" podName="kube-proxy-hdsr4" Jun 25 16:28:33.834609 systemd[1]: Created slice kubepods-besteffort-podf8ae1b08_51d8_42be_a4bb_cec74cda9b57.slice - libcontainer container kubepods-besteffort-podf8ae1b08_51d8_42be_a4bb_cec74cda9b57.slice. Jun 25 16:28:33.930856 kubelet[3202]: I0625 16:28:33.930820 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8ae1b08-51d8-42be-a4bb-cec74cda9b57-xtables-lock\") pod \"kube-proxy-hdsr4\" (UID: \"f8ae1b08-51d8-42be-a4bb-cec74cda9b57\") " pod="kube-system/kube-proxy-hdsr4" Jun 25 16:28:33.931145 kubelet[3202]: I0625 16:28:33.931121 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksb5\" (UniqueName: \"kubernetes.io/projected/f8ae1b08-51d8-42be-a4bb-cec74cda9b57-kube-api-access-wksb5\") pod \"kube-proxy-hdsr4\" (UID: \"f8ae1b08-51d8-42be-a4bb-cec74cda9b57\") " pod="kube-system/kube-proxy-hdsr4" Jun 25 16:28:33.931286 kubelet[3202]: I0625 16:28:33.931274 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8ae1b08-51d8-42be-a4bb-cec74cda9b57-lib-modules\") pod \"kube-proxy-hdsr4\" (UID: \"f8ae1b08-51d8-42be-a4bb-cec74cda9b57\") " pod="kube-system/kube-proxy-hdsr4" Jun 25 16:28:33.931423 kubelet[3202]: I0625 16:28:33.931411 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8ae1b08-51d8-42be-a4bb-cec74cda9b57-kube-proxy\") pod \"kube-proxy-hdsr4\" (UID: \"f8ae1b08-51d8-42be-a4bb-cec74cda9b57\") " pod="kube-system/kube-proxy-hdsr4" Jun 25 16:28:34.084488 kubelet[3202]: I0625 16:28:34.084106 3202 topology_manager.go:215] "Topology Admit Handler" podUID="873e7069-2a7a-4488-8818-889e864f65ec" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-rmm44" Jun 25 16:28:34.103895 systemd[1]: Created slice kubepods-besteffort-pod873e7069_2a7a_4488_8818_889e864f65ec.slice - libcontainer container kubepods-besteffort-pod873e7069_2a7a_4488_8818_889e864f65ec.slice. 
Jun 25 16:28:34.112849 kubelet[3202]: W0625 16:28:34.112797 3202 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-18-172" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-172' and this object Jun 25 16:28:34.112849 kubelet[3202]: E0625 16:28:34.112850 3202 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-18-172" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-172' and this object Jun 25 16:28:34.132916 kubelet[3202]: I0625 16:28:34.132878 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8bnr\" (UniqueName: \"kubernetes.io/projected/873e7069-2a7a-4488-8818-889e864f65ec-kube-api-access-p8bnr\") pod \"tigera-operator-76c4974c85-rmm44\" (UID: \"873e7069-2a7a-4488-8818-889e864f65ec\") " pod="tigera-operator/tigera-operator-76c4974c85-rmm44" Jun 25 16:28:34.133139 kubelet[3202]: I0625 16:28:34.133041 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/873e7069-2a7a-4488-8818-889e864f65ec-var-lib-calico\") pod \"tigera-operator-76c4974c85-rmm44\" (UID: \"873e7069-2a7a-4488-8818-889e864f65ec\") " pod="tigera-operator/tigera-operator-76c4974c85-rmm44" Jun 25 16:28:34.144063 containerd[1802]: time="2024-06-25T16:28:34.143244391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdsr4,Uid:f8ae1b08-51d8-42be-a4bb-cec74cda9b57,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:34.208783 containerd[1802]: time="2024-06-25T16:28:34.208663223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:34.208988 containerd[1802]: time="2024-06-25T16:28:34.208826695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:34.208988 containerd[1802]: time="2024-06-25T16:28:34.208896190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:34.209131 containerd[1802]: time="2024-06-25T16:28:34.208987082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:34.254735 systemd[1]: Started cri-containerd-f983ecc50840f45f5e3bea134c7f20f4caf19e11a25e76476bf488e3848a5d75.scope - libcontainer container f983ecc50840f45f5e3bea134c7f20f4caf19e11a25e76476bf488e3848a5d75. 
Jun 25 16:28:34.282000 audit: BPF prog-id=99 op=LOAD Jun 25 16:28:34.284223 kernel: audit: type=1334 audit(1719332914.282:392): prog-id=99 op=LOAD Jun 25 16:28:34.288330 kernel: audit: type=1334 audit(1719332914.283:393): prog-id=100 op=LOAD Jun 25 16:28:34.288441 kernel: audit: type=1300 audit(1719332914.283:393): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.283000 audit: BPF prog-id=100 op=LOAD Jun 25 16:28:34.283000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639383365636335303834306634356635653362656131333463376632 Jun 25 16:28:34.292654 kernel: audit: type=1327 audit(1719332914.283:393): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639383365636335303834306634356635653362656131333463376632 Jun 25 16:28:34.294203 kernel: audit: type=1334 audit(1719332914.283:394): prog-id=101 op=LOAD Jun 25 16:28:34.283000 audit: BPF prog-id=101 op=LOAD Jun 25 16:28:34.283000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.301532 kernel: audit: type=1300 audit(1719332914.283:394): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639383365636335303834306634356635653362656131333463376632 Jun 25 16:28:34.319490 kernel: audit: type=1327 audit(1719332914.283:394): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639383365636335303834306634356635653362656131333463376632 Jun 25 16:28:34.285000 audit: BPF prog-id=101 op=UNLOAD Jun 25 16:28:34.285000 audit: BPF prog-id=100 op=UNLOAD Jun 25 16:28:34.285000 audit: BPF prog-id=102 op=LOAD Jun 25 16:28:34.285000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639383365636335303834306634356635653362656131333463376632 Jun 25 16:28:34.330532 containerd[1802]: time="2024-06-25T16:28:34.330415793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdsr4,Uid:f8ae1b08-51d8-42be-a4bb-cec74cda9b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"f983ecc50840f45f5e3bea134c7f20f4caf19e11a25e76476bf488e3848a5d75\"" Jun 25 16:28:34.338536 containerd[1802]: time="2024-06-25T16:28:34.333878503Z" level=info msg="CreateContainer within sandbox \"f983ecc50840f45f5e3bea134c7f20f4caf19e11a25e76476bf488e3848a5d75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:28:34.391167 containerd[1802]: time="2024-06-25T16:28:34.391107118Z" level=info msg="CreateContainer within sandbox \"f983ecc50840f45f5e3bea134c7f20f4caf19e11a25e76476bf488e3848a5d75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5a5fe766b08f229b5419b133cbe4b6054eabc6c3f2a583638e91fe4a55a252d\"" Jun 25 16:28:34.392105 containerd[1802]: time="2024-06-25T16:28:34.392028593Z" level=info msg="StartContainer for \"b5a5fe766b08f229b5419b133cbe4b6054eabc6c3f2a583638e91fe4a55a252d\"" Jun 25 16:28:34.415859 containerd[1802]: time="2024-06-25T16:28:34.415600535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rmm44,Uid:873e7069-2a7a-4488-8818-889e864f65ec,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:28:34.465563 systemd[1]: Started cri-containerd-b5a5fe766b08f229b5419b133cbe4b6054eabc6c3f2a583638e91fe4a55a252d.scope - libcontainer container b5a5fe766b08f229b5419b133cbe4b6054eabc6c3f2a583638e91fe4a55a252d. Jun 25 16:28:34.484666 containerd[1802]: time="2024-06-25T16:28:34.484561476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:34.484666 containerd[1802]: time="2024-06-25T16:28:34.484625955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:34.484899 containerd[1802]: time="2024-06-25T16:28:34.484850119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:34.485018 containerd[1802]: time="2024-06-25T16:28:34.484891826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:34.510573 systemd[1]: Started cri-containerd-cd66713e508fb438f226bf75e7868eeb22ce2a15912bc903dd2491e7408a6d73.scope - libcontainer container cd66713e508fb438f226bf75e7868eeb22ce2a15912bc903dd2491e7408a6d73. 
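The audit PROCTITLE fields in these records (and in the NETFILTER_CFG records further down) are hex-encoded because the captured command line uses NUL bytes as argument separators; un-hexing and splitting on NUL recovers the argv. The runc proctitles above appear to be cut off at the audit subsystem's length limit, so the sample below uses the shorter KUBE-PROXY-CANARY iptables record logged shortly after this point. A minimal Python decoding sketch:

def decode_proctitle(hex_str: str) -> list[str]:
    # PROCTITLE is the raw command line, NUL-separated and hex-encoded by the kernel.
    return bytes.fromhex(hex_str).decode("utf-8", errors="replace").split("\x00")

sample = ("69707461626C6573002D770035002D5700313030303030002D4E"
          "004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# -> ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']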
Jun 25 16:28:34.516000 audit: BPF prog-id=103 op=LOAD Jun 25 16:28:34.516000 audit[3327]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3284 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235613566653736366230386632323962353431396231333363626534 Jun 25 16:28:34.516000 audit: BPF prog-id=104 op=LOAD Jun 25 16:28:34.516000 audit[3327]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3284 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235613566653736366230386632323962353431396231333363626534 Jun 25 16:28:34.516000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:28:34.516000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:28:34.516000 audit: BPF prog-id=105 op=LOAD Jun 25 16:28:34.516000 audit[3327]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3284 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235613566653736366230386632323962353431396231333363626534 Jun 25 16:28:34.544523 containerd[1802]: time="2024-06-25T16:28:34.544483208Z" level=info msg="StartContainer for \"b5a5fe766b08f229b5419b133cbe4b6054eabc6c3f2a583638e91fe4a55a252d\" returns successfully" Jun 25 16:28:34.550000 audit: BPF prog-id=106 op=LOAD Jun 25 16:28:34.551000 audit: BPF prog-id=107 op=LOAD Jun 25 16:28:34.551000 audit[3358]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3347 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363637313365353038666234333866323236626637356537383638 Jun 25 16:28:34.551000 audit: BPF prog-id=108 op=LOAD Jun 25 16:28:34.551000 audit[3358]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3347 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.551000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363637313365353038666234333866323236626637356537383638 Jun 25 16:28:34.551000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:28:34.551000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:28:34.551000 audit: BPF prog-id=109 op=LOAD Jun 25 16:28:34.551000 audit[3358]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3347 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363637313365353038666234333866323236626637356537383638 Jun 25 16:28:34.603118 containerd[1802]: time="2024-06-25T16:28:34.602984331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rmm44,Uid:873e7069-2a7a-4488-8818-889e864f65ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cd66713e508fb438f226bf75e7868eeb22ce2a15912bc903dd2491e7408a6d73\"" Jun 25 16:28:34.605566 containerd[1802]: time="2024-06-25T16:28:34.605502556Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:28:35.083162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829847295.mount: Deactivated successfully. Jun 25 16:28:35.402000 audit[3422]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.402000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb589a110 a2=0 a3=7ffeb589a0fc items=0 ppid=3352 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:28:35.404000 audit[3423]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.404000 audit[3423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1f7018a0 a2=0 a3=7fff1f70188c items=0 ppid=3352 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:28:35.406000 audit[3424]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.406000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8a4315e0 a2=0 a3=7fff8a4315cc items=0 ppid=3352 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:28:35.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:28:35.407000 audit[3425]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.407000 audit[3425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef28d3760 a2=0 a3=7ffef28d374c items=0 ppid=3352 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:28:35.409000 audit[3426]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=3426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.409000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe23ba16f0 a2=0 a3=7ffe23ba16dc items=0 ppid=3352 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:28:35.411000 audit[3427]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.411000 audit[3427]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb906f810 a2=0 a3=7ffcb906f7fc items=0 ppid=3352 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:28:35.518000 audit[3428]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.518000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffaee646a0 a2=0 a3=7fffaee6468c items=0 ppid=3352 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:28:35.525000 audit[3430]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.525000 audit[3430]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeac272fe0 a2=0 a3=7ffeac272fcc items=0 ppid=3352 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.525000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:28:35.543000 audit[3433]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.543000 audit[3433]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5717a6e0 a2=0 a3=7ffc5717a6cc items=0 ppid=3352 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:28:35.547000 audit[3434]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.547000 audit[3434]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3bb2a880 a2=0 a3=7fff3bb2a86c items=0 ppid=3352 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:28:35.551000 audit[3436]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.551000 audit[3436]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd232dcab0 a2=0 a3=7ffd232dca9c items=0 ppid=3352 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:28:35.554000 audit[3437]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.554000 audit[3437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00c497e0 a2=0 a3=7ffc00c497cc items=0 ppid=3352 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:28:35.559000 audit[3439]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.559000 audit[3439]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce4c7dc20 a2=0 a3=7ffce4c7dc0c items=0 
ppid=3352 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:28:35.565000 audit[3442]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.565000 audit[3442]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde40b22f0 a2=0 a3=7ffde40b22dc items=0 ppid=3352 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:28:35.568000 audit[3443]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.568000 audit[3443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd38509db0 a2=0 a3=7ffd38509d9c items=0 ppid=3352 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:28:35.572000 audit[3445]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.572000 audit[3445]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe767f660 a2=0 a3=7fffe767f64c items=0 ppid=3352 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:28:35.574000 audit[3446]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.574000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb5a9e030 a2=0 a3=7ffeb5a9e01c items=0 ppid=3352 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:28:35.579000 audit[3448]: NETFILTER_CFG table=filter:55 
family=2 entries=1 op=nft_register_rule pid=3448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.579000 audit[3448]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff04aa3300 a2=0 a3=7fff04aa32ec items=0 ppid=3352 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:35.585000 audit[3451]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.585000 audit[3451]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc618edc40 a2=0 a3=7ffc618edc2c items=0 ppid=3352 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:35.591000 audit[3454]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.591000 audit[3454]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc43070fb0 a2=0 a3=7ffc43070f9c items=0 ppid=3352 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:28:35.594000 audit[3455]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.594000 audit[3455]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefd8429f0 a2=0 a3=7ffefd8429dc items=0 ppid=3352 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:28:35.598000 audit[3457]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.598000 audit[3457]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffa46a31f0 a2=0 a3=7fffa46a31dc items=0 ppid=3352 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:35.605000 audit[3460]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.605000 audit[3460]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec6e9e240 a2=0 a3=7ffec6e9e22c items=0 ppid=3352 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.605000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:35.610000 audit[3461]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.610000 audit[3461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe304e45a0 a2=0 a3=7ffe304e458c items=0 ppid=3352 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:28:35.614000 audit[3463]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:35.614000 audit[3463]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcf324a7f0 a2=0 a3=7ffcf324a7dc items=0 ppid=3352 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:28:35.663000 audit[3469]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:35.663000 audit[3469]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe682737b0 a2=0 a3=7ffe6827379c items=0 ppid=3352 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:35.677000 audit[3469]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:35.677000 audit[3469]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe682737b0 a2=0 a3=7ffe6827379c 
items=0 ppid=3352 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:35.680000 audit[3475]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.680000 audit[3475]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffea1fc0a40 a2=0 a3=7ffea1fc0a2c items=0 ppid=3352 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:28:35.685000 audit[3477]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.685000 audit[3477]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe3b0b6610 a2=0 a3=7ffe3b0b65fc items=0 ppid=3352 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:28:35.694000 audit[3480]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.694000 audit[3480]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe2e423df0 a2=0 a3=7ffe2e423ddc items=0 ppid=3352 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:28:35.697000 audit[3481]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.697000 audit[3481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf45b0d30 a2=0 a3=7ffdf45b0d1c items=0 ppid=3352 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:28:35.701000 audit[3483]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3483 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.701000 audit[3483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1f71beb0 a2=0 a3=7fff1f71be9c items=0 ppid=3352 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:28:35.703000 audit[3484]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.703000 audit[3484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb6eccaf0 a2=0 a3=7ffdb6eccadc items=0 ppid=3352 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:28:35.708000 audit[3486]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.708000 audit[3486]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcb0523b0 a2=0 a3=7fffcb05239c items=0 ppid=3352 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:28:35.719000 audit[3489]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.719000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff12dd1bd0 a2=0 a3=7fff12dd1bbc items=0 ppid=3352 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:28:35.722000 audit[3490]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.722000 audit[3490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1299ee00 a2=0 a3=7ffe1299edec items=0 ppid=3352 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jun 25 16:28:35.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:28:35.731000 audit[3492]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.731000 audit[3492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3f132080 a2=0 a3=7ffc3f13206c items=0 ppid=3352 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:28:35.739000 audit[3493]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.739000 audit[3493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd30e499f0 a2=0 a3=7ffd30e499dc items=0 ppid=3352 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:28:35.748000 audit[3495]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.748000 audit[3495]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff40605490 a2=0 a3=7fff4060547c items=0 ppid=3352 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:35.758000 audit[3498]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.758000 audit[3498]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdd72ae20 a2=0 a3=7ffcdd72ae0c items=0 ppid=3352 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:28:35.768000 audit[3501]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.768000 audit[3501]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffeafaa9e00 a2=0 a3=7ffeafaa9dec items=0 ppid=3352 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:28:35.784000 audit[3502]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.784000 audit[3502]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffde432eb0 a2=0 a3=7fffde432e9c items=0 ppid=3352 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:28:35.790000 audit[3504]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.790000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdb00ee1c0 a2=0 a3=7ffdb00ee1ac items=0 ppid=3352 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:35.802000 audit[3507]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.802000 audit[3507]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe9744b460 a2=0 a3=7ffe9744b44c items=0 ppid=3352 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:35.806000 audit[3508]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.806000 audit[3508]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6adfcd00 a2=0 a3=7ffe6adfccec items=0 ppid=3352 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:28:35.813000 audit[3510]: NETFILTER_CFG table=nat:83 family=10 
entries=2 op=nft_register_chain pid=3510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.813000 audit[3510]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffceabb1870 a2=0 a3=7ffceabb185c items=0 ppid=3352 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:28:35.821000 audit[3511]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.821000 audit[3511]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce960ee60 a2=0 a3=7ffce960ee4c items=0 ppid=3352 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:28:35.828000 audit[3513]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.828000 audit[3513]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffce2498ac0 a2=0 a3=7ffce2498aac items=0 ppid=3352 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:35.837000 audit[3516]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:35.837000 audit[3516]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebbe23e60 a2=0 a3=7ffebbe23e4c items=0 ppid=3352 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:35.852000 audit[3518]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:28:35.852000 audit[3518]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdd261e560 a2=0 a3=7ffdd261e54c items=0 ppid=3352 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.852000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:35.856000 audit[3518]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=3518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:28:35.856000 audit[3518]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdd261e560 a2=0 a3=7ffdd261e54c items=0 ppid=3352 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.856000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:36.026234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075739718.mount: Deactivated successfully. Jun 25 16:28:36.777057 containerd[1802]: time="2024-06-25T16:28:36.777007389Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:36.779383 containerd[1802]: time="2024-06-25T16:28:36.779324149Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076080" Jun 25 16:28:36.782077 containerd[1802]: time="2024-06-25T16:28:36.782037429Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:36.785162 containerd[1802]: time="2024-06-25T16:28:36.785125230Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:36.788221 containerd[1802]: time="2024-06-25T16:28:36.788168461Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:36.789044 containerd[1802]: time="2024-06-25T16:28:36.789005939Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.183435631s" Jun 25 16:28:36.789235 containerd[1802]: time="2024-06-25T16:28:36.789206763Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:28:36.791490 containerd[1802]: time="2024-06-25T16:28:36.791460466Z" level=info msg="CreateContainer within sandbox \"cd66713e508fb438f226bf75e7868eeb22ce2a15912bc903dd2491e7408a6d73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:28:36.826627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254451891.mount: Deactivated successfully. 
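[Editor's note] The containerd entries above report the tigera-operator pull together with its repo digest, on-disk size and wall-clock duration. A hypothetical helper for extracting those figures from a journal dump (assumes the text is piped on stdin and follows the msg="Pulled image ..." format shown here; the script is illustrative, not part of containerd):

```python
# Hypothetical helper: summarise containerd "Pulled image" records like the ones
# above (image, reported size in bytes, wall-clock duration). Reads journal text
# on stdin; the regex follows the msg format shown in this log.
import re
import sys

PULLED = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
    r'size \\?"(?P<size>\d+)\\?" in (?P<secs>\d+(?:\.\d+)?)s'
)

for line in sys.stdin:
    m = PULLED.search(line)
    if not m:
        continue
    size = int(m.group("size"))
    secs = float(m.group("secs"))
    print(f'{m.group("image")}: {size} bytes in {secs:.3f}s '
          f'(~{size / secs / 1e6:.1f} MB/s)')
```

For the pull above this reports 22070263 bytes in ~2.18 s, roughly 10 MB/s; durations that include minutes (e.g. "1m2s") would need a slightly richer pattern.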
Jun 25 16:28:36.850494 containerd[1802]: time="2024-06-25T16:28:36.850440021Z" level=info msg="CreateContainer within sandbox \"cd66713e508fb438f226bf75e7868eeb22ce2a15912bc903dd2491e7408a6d73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c82084be3695c27d43de75e550e15904733b23b82e95da4def7a1fe5ca67146\"" Jun 25 16:28:36.852639 containerd[1802]: time="2024-06-25T16:28:36.851204713Z" level=info msg="StartContainer for \"4c82084be3695c27d43de75e550e15904733b23b82e95da4def7a1fe5ca67146\"" Jun 25 16:28:36.910393 systemd[1]: Started cri-containerd-4c82084be3695c27d43de75e550e15904733b23b82e95da4def7a1fe5ca67146.scope - libcontainer container 4c82084be3695c27d43de75e550e15904733b23b82e95da4def7a1fe5ca67146. Jun 25 16:28:36.928000 audit: BPF prog-id=110 op=LOAD Jun 25 16:28:36.929725 kernel: kauditd_printk_skb: 181 callbacks suppressed Jun 25 16:28:36.929808 kernel: audit: type=1334 audit(1719332916.928:460): prog-id=110 op=LOAD Jun 25 16:28:36.929000 audit: BPF prog-id=111 op=LOAD Jun 25 16:28:36.929000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3347 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.935844 kernel: audit: type=1334 audit(1719332916.929:461): prog-id=111 op=LOAD Jun 25 16:28:36.935933 kernel: audit: type=1300 audit(1719332916.929:461): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3347 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.935984 kernel: audit: type=1327 audit(1719332916.929:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463383230383462653336393563323764343364653735653535306531 Jun 25 16:28:36.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463383230383462653336393563323764343364653735653535306531 Jun 25 16:28:36.938218 kernel: audit: type=1334 audit(1719332916.929:462): prog-id=112 op=LOAD Jun 25 16:28:36.938292 kernel: audit: type=1300 audit(1719332916.929:462): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3347 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.929000 audit: BPF prog-id=112 op=LOAD Jun 25 16:28:36.929000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3347 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.929000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463383230383462653336393563323764343364653735653535306531 Jun 25 16:28:36.946069 kernel: audit: type=1327 audit(1719332916.929:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463383230383462653336393563323764343364653735653535306531 Jun 25 16:28:36.946199 kernel: audit: type=1334 audit(1719332916.929:463): prog-id=112 op=UNLOAD Jun 25 16:28:36.929000 audit: BPF prog-id=112 op=UNLOAD Jun 25 16:28:36.949562 kernel: audit: type=1334 audit(1719332916.929:464): prog-id=111 op=UNLOAD Jun 25 16:28:36.949650 kernel: audit: type=1334 audit(1719332916.929:465): prog-id=113 op=LOAD Jun 25 16:28:36.929000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:28:36.929000 audit: BPF prog-id=113 op=LOAD Jun 25 16:28:36.929000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3347 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463383230383462653336393563323764343364653735653535306531 Jun 25 16:28:36.961031 containerd[1802]: time="2024-06-25T16:28:36.960004136Z" level=info msg="StartContainer for \"4c82084be3695c27d43de75e550e15904733b23b82e95da4def7a1fe5ca67146\" returns successfully" Jun 25 16:28:37.051106 kubelet[3202]: I0625 16:28:37.050976 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hdsr4" podStartSLOduration=4.050922096 podStartE2EDuration="4.050922096s" podCreationTimestamp="2024-06-25 16:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:35.048613881 +0000 UTC m=+14.340846158" watchObservedRunningTime="2024-06-25 16:28:37.050922096 +0000 UTC m=+16.343154371" Jun 25 16:28:40.145000 audit[3568]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:40.145000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe42fe6380 a2=0 a3=7ffe42fe636c items=0 ppid=3352 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:40.148000 audit[3568]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:40.148000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe42fe6380 a2=0 a3=0 items=0 ppid=3352 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:40.158000 audit[3570]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:40.158000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff5e42eac0 a2=0 a3=7fff5e42eaac items=0 ppid=3352 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:40.159000 audit[3570]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:40.159000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5e42eac0 a2=0 a3=0 items=0 ppid=3352 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:40.294291 kubelet[3202]: I0625 16:28:40.294242 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-rmm44" podStartSLOduration=4.109379896 podStartE2EDuration="6.294159959s" podCreationTimestamp="2024-06-25 16:28:34 +0000 UTC" firstStartedPulling="2024-06-25 16:28:34.605019735 +0000 UTC m=+13.897251990" lastFinishedPulling="2024-06-25 16:28:36.789799785 +0000 UTC m=+16.082032053" observedRunningTime="2024-06-25 16:28:37.051301514 +0000 UTC m=+16.343533788" watchObservedRunningTime="2024-06-25 16:28:40.294159959 +0000 UTC m=+19.586392289" Jun 25 16:28:40.295674 kubelet[3202]: I0625 16:28:40.295628 3202 topology_manager.go:215] "Topology Admit Handler" podUID="fefd928b-79d8-4dcd-902d-9fbc418a0fc1" podNamespace="calico-system" podName="calico-typha-866b59c96-4ft9s" Jun 25 16:28:40.306866 systemd[1]: Created slice kubepods-besteffort-podfefd928b_79d8_4dcd_902d_9fbc418a0fc1.slice - libcontainer container kubepods-besteffort-podfefd928b_79d8_4dcd_902d_9fbc418a0fc1.slice. 
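[Editor's note] The kubelet lines above come from its pod startup latency tracker, which logs an SLO duration (excluding image pulls) alongside the end-to-end figure. A hypothetical extraction sketch under the same stdin assumption, matching the key=value layout shown here:

```python
# Hypothetical sketch: extract kubelet "Observed pod startup duration" records
# (as logged above) and print SLO vs end-to-end startup time per pod.
import re
import sys

OBSERVED = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
    r' podStartSLOduration=(?P<slo>[\d.]+)'
    r' podStartE2EDuration="(?P<e2e>[\d.]+)s"'
)

for line in sys.stdin:
    m = OBSERVED.search(line)
    if m:
        print(f'{m.group("pod")}: SLO {float(m.group("slo")):.2f}s, '
              f'end-to-end {float(m.group("e2e")):.2f}s')
```

On this log it yields ~4.05 s for kube-proxy-hdsr4 and 4.11 s SLO / 6.29 s end-to-end for the tigera-operator pod; the ~2.18 s gap matches the operator image pull recorded earlier.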
Jun 25 16:28:40.383839 kubelet[3202]: I0625 16:28:40.383801 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fefd928b-79d8-4dcd-902d-9fbc418a0fc1-tigera-ca-bundle\") pod \"calico-typha-866b59c96-4ft9s\" (UID: \"fefd928b-79d8-4dcd-902d-9fbc418a0fc1\") " pod="calico-system/calico-typha-866b59c96-4ft9s" Jun 25 16:28:40.384075 kubelet[3202]: I0625 16:28:40.384050 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9gk\" (UniqueName: \"kubernetes.io/projected/fefd928b-79d8-4dcd-902d-9fbc418a0fc1-kube-api-access-6k9gk\") pod \"calico-typha-866b59c96-4ft9s\" (UID: \"fefd928b-79d8-4dcd-902d-9fbc418a0fc1\") " pod="calico-system/calico-typha-866b59c96-4ft9s" Jun 25 16:28:40.384165 kubelet[3202]: I0625 16:28:40.384100 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fefd928b-79d8-4dcd-902d-9fbc418a0fc1-typha-certs\") pod \"calico-typha-866b59c96-4ft9s\" (UID: \"fefd928b-79d8-4dcd-902d-9fbc418a0fc1\") " pod="calico-system/calico-typha-866b59c96-4ft9s" Jun 25 16:28:40.421151 kubelet[3202]: I0625 16:28:40.421028 3202 topology_manager.go:215] "Topology Admit Handler" podUID="4e153c31-2cce-478e-9190-130fbb441581" podNamespace="calico-system" podName="calico-node-mt2wn" Jun 25 16:28:40.430258 systemd[1]: Created slice kubepods-besteffort-pod4e153c31_2cce_478e_9190_130fbb441581.slice - libcontainer container kubepods-besteffort-pod4e153c31_2cce_478e_9190_130fbb441581.slice. Jun 25 16:28:40.551841 kubelet[3202]: I0625 16:28:40.551798 3202 topology_manager.go:215] "Topology Admit Handler" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" podNamespace="calico-system" podName="csi-node-driver-f7s8f" Jun 25 16:28:40.552307 kubelet[3202]: E0625 16:28:40.552286 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:40.586207 kubelet[3202]: I0625 16:28:40.586158 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-lib-modules\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.586594 kubelet[3202]: I0625 16:28:40.586517 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-var-run-calico\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.586829 kubelet[3202]: I0625 16:28:40.586768 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-xtables-lock\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.586990 kubelet[3202]: I0625 16:28:40.586977 3202 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-policysync\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.587166 kubelet[3202]: I0625 16:28:40.587152 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-var-lib-calico\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.587414 kubelet[3202]: I0625 16:28:40.587400 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-cni-bin-dir\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.587592 kubelet[3202]: I0625 16:28:40.587568 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxsb8\" (UniqueName: \"kubernetes.io/projected/4e153c31-2cce-478e-9190-130fbb441581-kube-api-access-dxsb8\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.587885 kubelet[3202]: I0625 16:28:40.587870 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e153c31-2cce-478e-9190-130fbb441581-node-certs\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.588067 kubelet[3202]: I0625 16:28:40.588043 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-flexvol-driver-host\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.588297 kubelet[3202]: I0625 16:28:40.588272 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e153c31-2cce-478e-9190-130fbb441581-tigera-ca-bundle\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.588533 kubelet[3202]: I0625 16:28:40.588519 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-cni-net-dir\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.588737 kubelet[3202]: I0625 16:28:40.588725 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e153c31-2cce-478e-9190-130fbb441581-cni-log-dir\") pod \"calico-node-mt2wn\" (UID: \"4e153c31-2cce-478e-9190-130fbb441581\") " pod="calico-system/calico-node-mt2wn" Jun 25 16:28:40.617078 containerd[1802]: time="2024-06-25T16:28:40.617032503Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-866b59c96-4ft9s,Uid:fefd928b-79d8-4dcd-902d-9fbc418a0fc1,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:40.669917 containerd[1802]: time="2024-06-25T16:28:40.669772796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:40.670204 containerd[1802]: time="2024-06-25T16:28:40.670148523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:40.670392 containerd[1802]: time="2024-06-25T16:28:40.670339600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:40.670611 containerd[1802]: time="2024-06-25T16:28:40.670551001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:40.696788 kubelet[3202]: I0625 16:28:40.691913 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb1b6fee-76dd-4b53-8bcb-f17d750a370e-kubelet-dir\") pod \"csi-node-driver-f7s8f\" (UID: \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\") " pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:40.696788 kubelet[3202]: I0625 16:28:40.692057 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb1b6fee-76dd-4b53-8bcb-f17d750a370e-registration-dir\") pod \"csi-node-driver-f7s8f\" (UID: \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\") " pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:40.696788 kubelet[3202]: I0625 16:28:40.692283 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw88k\" (UniqueName: \"kubernetes.io/projected/cb1b6fee-76dd-4b53-8bcb-f17d750a370e-kube-api-access-hw88k\") pod \"csi-node-driver-f7s8f\" (UID: \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\") " pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:40.696788 kubelet[3202]: I0625 16:28:40.692340 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cb1b6fee-76dd-4b53-8bcb-f17d750a370e-varrun\") pod \"csi-node-driver-f7s8f\" (UID: \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\") " pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:40.696788 kubelet[3202]: I0625 16:28:40.692369 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb1b6fee-76dd-4b53-8bcb-f17d750a370e-socket-dir\") pod \"csi-node-driver-f7s8f\" (UID: \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\") " pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:40.696788 kubelet[3202]: E0625 16:28:40.696181 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.697150 kubelet[3202]: W0625 16:28:40.696226 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.697150 kubelet[3202]: E0625 16:28:40.696261 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.697150 kubelet[3202]: E0625 16:28:40.696562 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.697150 kubelet[3202]: W0625 16:28:40.696573 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.697880 kubelet[3202]: E0625 16:28:40.697383 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.697880 kubelet[3202]: E0625 16:28:40.697691 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.697880 kubelet[3202]: W0625 16:28:40.697703 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.698268 kubelet[3202]: E0625 16:28:40.698160 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.698268 kubelet[3202]: W0625 16:28:40.698173 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.698590 kubelet[3202]: E0625 16:28:40.698576 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.698689 kubelet[3202]: W0625 16:28:40.698677 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.698916 kubelet[3202]: E0625 16:28:40.698790 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.699064 kubelet[3202]: E0625 16:28:40.698618 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.699146 kubelet[3202]: E0625 16:28:40.698636 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.699347 kubelet[3202]: E0625 16:28:40.699337 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.699533 kubelet[3202]: W0625 16:28:40.699519 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.699625 kubelet[3202]: E0625 16:28:40.699616 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.701274 kubelet[3202]: E0625 16:28:40.701253 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.701274 kubelet[3202]: W0625 16:28:40.701273 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.701529 kubelet[3202]: E0625 16:28:40.701290 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.714534 kubelet[3202]: E0625 16:28:40.714505 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.714534 kubelet[3202]: W0625 16:28:40.714533 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.714721 kubelet[3202]: E0625 16:28:40.714575 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.715072 kubelet[3202]: E0625 16:28:40.715000 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.715072 kubelet[3202]: W0625 16:28:40.715016 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.715072 kubelet[3202]: E0625 16:28:40.715040 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.727067 kubelet[3202]: E0625 16:28:40.727045 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.727244 kubelet[3202]: W0625 16:28:40.727228 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.727337 kubelet[3202]: E0625 16:28:40.727326 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.730396 systemd[1]: Started cri-containerd-a236baad95bd94832343f2e68a0dac5bc1806f3c432a9592e4548b431c86453e.scope - libcontainer container a236baad95bd94832343f2e68a0dac5bc1806f3c432a9592e4548b431c86453e. 
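[Editor's note] The repeated driver-call failures above are the kubelet probing the FlexVolume plugin directory before Calico has populated it: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet (the flexvol-driver-host mount in the calico-node pod spec above is what later provides it), so the init call produces no output and the JSON unmarshal fails. A hypothetical local check, with the conventional FlexVolume init reply shown for contrast (the path is taken from the log; the JSON shape is an assumption about the driver contract, not read from this system):

```python
# Hypothetical diagnostic for the FlexVolume errors above: check whether the
# driver binary the kubelet is probing exists and is executable, and show the
# shape of JSON a driver is conventionally expected to answer "init" with.
import json
import os

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

if not os.path.isfile(DRIVER):
    print(f"missing: {DRIVER} (expected until the calico-node init container installs it)")
elif not os.access(DRIVER, os.X_OK):
    print(f"present but not executable: {DRIVER}")
else:
    print(f"ok: {DRIVER}")

# A well-formed driver would answer `uds init` with JSON roughly like this;
# an empty reply is what produces "unexpected end of JSON input" above.
print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
```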
Jun 25 16:28:40.737627 containerd[1802]: time="2024-06-25T16:28:40.737578003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt2wn,Uid:4e153c31-2cce-478e-9190-130fbb441581,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:40.747000 audit: BPF prog-id=114 op=LOAD Jun 25 16:28:40.748000 audit: BPF prog-id=115 op=LOAD Jun 25 16:28:40.748000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3582 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132333662616164393562643934383332333433663265363861306461 Jun 25 16:28:40.748000 audit: BPF prog-id=116 op=LOAD Jun 25 16:28:40.748000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3582 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132333662616164393562643934383332333433663265363861306461 Jun 25 16:28:40.748000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:28:40.748000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:28:40.748000 audit: BPF prog-id=117 op=LOAD Jun 25 16:28:40.748000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3582 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132333662616164393562643934383332333433663265363861306461 Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.793366 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.794627 kubelet[3202]: W0625 16:28:40.793391 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.793416 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.794021 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.794627 kubelet[3202]: W0625 16:28:40.794035 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.794058 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.794362 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.794627 kubelet[3202]: W0625 16:28:40.794372 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.794627 kubelet[3202]: E0625 16:28:40.794390 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.795395 kubelet[3202]: E0625 16:28:40.795223 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.795395 kubelet[3202]: W0625 16:28:40.795238 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.795395 kubelet[3202]: E0625 16:28:40.795260 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.795612 kubelet[3202]: E0625 16:28:40.795520 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.795612 kubelet[3202]: W0625 16:28:40.795532 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.795978 kubelet[3202]: E0625 16:28:40.795717 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.795978 kubelet[3202]: W0625 16:28:40.795729 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.795978 kubelet[3202]: E0625 16:28:40.795767 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.795978 kubelet[3202]: E0625 16:28:40.795847 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.796240 kubelet[3202]: E0625 16:28:40.796006 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.796240 kubelet[3202]: W0625 16:28:40.796016 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.796240 kubelet[3202]: E0625 16:28:40.796112 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.796377 kubelet[3202]: E0625 16:28:40.796263 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.796377 kubelet[3202]: W0625 16:28:40.796272 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.796377 kubelet[3202]: E0625 16:28:40.796360 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.796585 kubelet[3202]: E0625 16:28:40.796490 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.796585 kubelet[3202]: W0625 16:28:40.796500 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.796585 kubelet[3202]: E0625 16:28:40.796526 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.796754 kubelet[3202]: E0625 16:28:40.796739 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.796754 kubelet[3202]: W0625 16:28:40.796751 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.796877 kubelet[3202]: E0625 16:28:40.796770 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.797003 kubelet[3202]: E0625 16:28:40.796986 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.797003 kubelet[3202]: W0625 16:28:40.797003 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.797131 kubelet[3202]: E0625 16:28:40.797087 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.797623 kubelet[3202]: E0625 16:28:40.797256 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.797623 kubelet[3202]: W0625 16:28:40.797267 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.797623 kubelet[3202]: E0625 16:28:40.797372 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.797623 kubelet[3202]: E0625 16:28:40.797518 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.797623 kubelet[3202]: W0625 16:28:40.797526 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.797623 kubelet[3202]: E0625 16:28:40.797614 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.798152 kubelet[3202]: E0625 16:28:40.797739 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.798152 kubelet[3202]: W0625 16:28:40.797748 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.798152 kubelet[3202]: E0625 16:28:40.797928 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.798152 kubelet[3202]: E0625 16:28:40.798076 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.798152 kubelet[3202]: W0625 16:28:40.798086 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.798436 kubelet[3202]: E0625 16:28:40.798233 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.798436 kubelet[3202]: E0625 16:28:40.798368 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.798436 kubelet[3202]: W0625 16:28:40.798377 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.798436 kubelet[3202]: E0625 16:28:40.798396 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.798636 kubelet[3202]: E0625 16:28:40.798614 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.798636 kubelet[3202]: W0625 16:28:40.798623 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.798733 kubelet[3202]: E0625 16:28:40.798642 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.798850 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.799884 kubelet[3202]: W0625 16:28:40.798862 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.798945 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.799096 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.799884 kubelet[3202]: W0625 16:28:40.799104 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.799200 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.799366 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.799884 kubelet[3202]: W0625 16:28:40.799375 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.799466 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.799884 kubelet[3202]: E0625 16:28:40.799618 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.800526 kubelet[3202]: W0625 16:28:40.799628 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.801761 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.801980 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.804509 kubelet[3202]: W0625 16:28:40.801991 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.802009 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.802327 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.804509 kubelet[3202]: W0625 16:28:40.802337 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.802357 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.802584 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.804509 kubelet[3202]: W0625 16:28:40.802594 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.804509 kubelet[3202]: E0625 16:28:40.802613 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.805008 kubelet[3202]: E0625 16:28:40.803079 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.805008 kubelet[3202]: W0625 16:28:40.803089 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.805008 kubelet[3202]: E0625 16:28:40.803105 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.806279 containerd[1802]: time="2024-06-25T16:28:40.806115912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:40.806497 containerd[1802]: time="2024-06-25T16:28:40.806335125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:40.806585 containerd[1802]: time="2024-06-25T16:28:40.806481466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:40.806646 containerd[1802]: time="2024-06-25T16:28:40.806557634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:40.823584 kubelet[3202]: E0625 16:28:40.823464 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:40.823584 kubelet[3202]: W0625 16:28:40.823487 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:40.823584 kubelet[3202]: E0625 16:28:40.823515 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:40.839268 containerd[1802]: time="2024-06-25T16:28:40.839222224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866b59c96-4ft9s,Uid:fefd928b-79d8-4dcd-902d-9fbc418a0fc1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a236baad95bd94832343f2e68a0dac5bc1806f3c432a9592e4548b431c86453e\"" Jun 25 16:28:40.847403 containerd[1802]: time="2024-06-25T16:28:40.846066766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:28:40.856962 systemd[1]: Started cri-containerd-1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf.scope - libcontainer container 1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf. Jun 25 16:28:40.914000 audit: BPF prog-id=118 op=LOAD Jun 25 16:28:40.915000 audit: BPF prog-id=119 op=LOAD Jun 25 16:28:40.915000 audit[3663]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3629 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303865366132663264633230343434356464336236643135303231 Jun 25 16:28:40.917000 audit: BPF prog-id=120 op=LOAD Jun 25 16:28:40.917000 audit[3663]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3629 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303865366132663264633230343434356464336236643135303231 Jun 25 16:28:40.918000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:28:40.918000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:28:40.918000 audit: BPF prog-id=121 op=LOAD Jun 25 16:28:40.918000 audit[3663]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3629 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:40.918000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303865366132663264633230343434356464336236643135303231 Jun 25 16:28:40.970856 containerd[1802]: time="2024-06-25T16:28:40.960478824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt2wn,Uid:4e153c31-2cce-478e-9190-130fbb441581,Namespace:calico-system,Attempt:0,} returns sandbox id \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\"" Jun 25 16:28:41.186000 audit[3697]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:41.186000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd0b9b01b0 a2=0 a3=7ffd0b9b019c items=0 ppid=3352 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:41.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:41.187000 audit[3697]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:41.187000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0b9b01b0 a2=0 a3=0 items=0 ppid=3352 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:41.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:41.946316 kubelet[3202]: E0625 16:28:41.946278 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:43.947404 kubelet[3202]: E0625 16:28:43.947365 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:44.032968 containerd[1802]: time="2024-06-25T16:28:44.032914699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:44.036634 containerd[1802]: time="2024-06-25T16:28:44.036566994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:28:44.038999 containerd[1802]: time="2024-06-25T16:28:44.038958886Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:44.042318 containerd[1802]: time="2024-06-25T16:28:44.042274359Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:44.045700 containerd[1802]: time="2024-06-25T16:28:44.045154800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:44.048665 containerd[1802]: time="2024-06-25T16:28:44.048134577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.202017132s" Jun 25 16:28:44.048665 containerd[1802]: time="2024-06-25T16:28:44.048202383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:28:44.063745 containerd[1802]: time="2024-06-25T16:28:44.063272531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:28:44.077630 containerd[1802]: time="2024-06-25T16:28:44.077452142Z" level=info msg="CreateContainer within sandbox \"a236baad95bd94832343f2e68a0dac5bc1806f3c432a9592e4548b431c86453e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:28:44.119551 containerd[1802]: time="2024-06-25T16:28:44.119486883Z" level=info msg="CreateContainer within sandbox \"a236baad95bd94832343f2e68a0dac5bc1806f3c432a9592e4548b431c86453e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"53523824d5aac4fa390206998d4d81e29885add1eb499551ac1a34e991d80a94\"" Jun 25 16:28:44.121070 containerd[1802]: time="2024-06-25T16:28:44.121029868Z" level=info msg="StartContainer for \"53523824d5aac4fa390206998d4d81e29885add1eb499551ac1a34e991d80a94\"" Jun 25 16:28:44.221507 systemd[1]: Started cri-containerd-53523824d5aac4fa390206998d4d81e29885add1eb499551ac1a34e991d80a94.scope - libcontainer container 53523824d5aac4fa390206998d4d81e29885add1eb499551ac1a34e991d80a94. 
Jun 25 16:28:44.251228 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:28:44.251374 kernel: audit: type=1334 audit(1719332924.247:484): prog-id=122 op=LOAD Jun 25 16:28:44.251412 kernel: audit: type=1334 audit(1719332924.248:485): prog-id=123 op=LOAD Jun 25 16:28:44.251444 kernel: audit: type=1300 audit(1719332924.248:485): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3582 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.247000 audit: BPF prog-id=122 op=LOAD Jun 25 16:28:44.248000 audit: BPF prog-id=123 op=LOAD Jun 25 16:28:44.248000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3582 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533353233383234643561616334666133393032303639393864346438 Jun 25 16:28:44.255557 kernel: audit: type=1327 audit(1719332924.248:485): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533353233383234643561616334666133393032303639393864346438 Jun 25 16:28:44.248000 audit: BPF prog-id=124 op=LOAD Jun 25 16:28:44.256517 kernel: audit: type=1334 audit(1719332924.248:486): prog-id=124 op=LOAD Jun 25 16:28:44.248000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3582 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.259302 kernel: audit: type=1300 audit(1719332924.248:486): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3582 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533353233383234643561616334666133393032303639393864346438 Jun 25 16:28:44.262161 kernel: audit: type=1327 audit(1719332924.248:486): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533353233383234643561616334666133393032303639393864346438 Jun 25 16:28:44.248000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:28:44.248000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:28:44.264243 kernel: audit: type=1334 audit(1719332924.248:487): prog-id=124 op=UNLOAD Jun 25 16:28:44.264313 kernel: audit: type=1334 audit(1719332924.248:488): prog-id=123 op=UNLOAD Jun 25 16:28:44.248000 audit: BPF prog-id=125 op=LOAD Jun 25 
16:28:44.265304 kernel: audit: type=1334 audit(1719332924.248:489): prog-id=125 op=LOAD Jun 25 16:28:44.248000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3582 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533353233383234643561616334666133393032303639393864346438 Jun 25 16:28:44.333102 containerd[1802]: time="2024-06-25T16:28:44.333050641Z" level=info msg="StartContainer for \"53523824d5aac4fa390206998d4d81e29885add1eb499551ac1a34e991d80a94\" returns successfully" Jun 25 16:28:45.174491 kubelet[3202]: E0625 16:28:45.174455 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.174491 kubelet[3202]: W0625 16:28:45.174482 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.175171 kubelet[3202]: E0625 16:28:45.174513 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.175171 kubelet[3202]: E0625 16:28:45.174816 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.175171 kubelet[3202]: W0625 16:28:45.174830 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.175171 kubelet[3202]: E0625 16:28:45.174850 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.175171 kubelet[3202]: E0625 16:28:45.175060 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.175171 kubelet[3202]: W0625 16:28:45.175069 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.175171 kubelet[3202]: E0625 16:28:45.175087 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.175577 kubelet[3202]: E0625 16:28:45.175297 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.175577 kubelet[3202]: W0625 16:28:45.175306 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.175577 kubelet[3202]: E0625 16:28:45.175321 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.175577 kubelet[3202]: E0625 16:28:45.175520 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.175577 kubelet[3202]: W0625 16:28:45.175528 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.175577 kubelet[3202]: E0625 16:28:45.175544 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.176019 kubelet[3202]: E0625 16:28:45.175728 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176019 kubelet[3202]: W0625 16:28:45.175738 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176019 kubelet[3202]: E0625 16:28:45.175756 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.176019 kubelet[3202]: E0625 16:28:45.175948 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176019 kubelet[3202]: W0625 16:28:45.175958 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176019 kubelet[3202]: E0625 16:28:45.175973 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.176360 kubelet[3202]: E0625 16:28:45.176154 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176360 kubelet[3202]: W0625 16:28:45.176163 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176360 kubelet[3202]: E0625 16:28:45.176177 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.176514 kubelet[3202]: E0625 16:28:45.176393 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176514 kubelet[3202]: W0625 16:28:45.176402 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176514 kubelet[3202]: E0625 16:28:45.176416 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.176656 kubelet[3202]: E0625 16:28:45.176596 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176656 kubelet[3202]: W0625 16:28:45.176606 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176656 kubelet[3202]: E0625 16:28:45.176620 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.176810 kubelet[3202]: E0625 16:28:45.176795 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.176810 kubelet[3202]: W0625 16:28:45.176803 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.176911 kubelet[3202]: E0625 16:28:45.176817 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.177015 kubelet[3202]: E0625 16:28:45.176996 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.177015 kubelet[3202]: W0625 16:28:45.177013 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.177163 kubelet[3202]: E0625 16:28:45.177029 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.177265 kubelet[3202]: E0625 16:28:45.177250 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.177265 kubelet[3202]: W0625 16:28:45.177265 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.181653 kubelet[3202]: E0625 16:28:45.177281 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.181716 kubelet[3202]: E0625 16:28:45.181691 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.181716 kubelet[3202]: W0625 16:28:45.181709 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.181806 kubelet[3202]: E0625 16:28:45.181735 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.182011 kubelet[3202]: E0625 16:28:45.181973 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.182011 kubelet[3202]: W0625 16:28:45.182001 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.182011 kubelet[3202]: E0625 16:28:45.182017 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.234826 kubelet[3202]: E0625 16:28:45.234790 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.234826 kubelet[3202]: W0625 16:28:45.234816 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.235368 kubelet[3202]: E0625 16:28:45.234847 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.235368 kubelet[3202]: E0625 16:28:45.235234 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.235368 kubelet[3202]: W0625 16:28:45.235247 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.235368 kubelet[3202]: E0625 16:28:45.235275 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.235672 kubelet[3202]: E0625 16:28:45.235515 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.235672 kubelet[3202]: W0625 16:28:45.235525 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.235672 kubelet[3202]: E0625 16:28:45.235546 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.236020 kubelet[3202]: E0625 16:28:45.235900 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.236020 kubelet[3202]: W0625 16:28:45.235912 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.236020 kubelet[3202]: E0625 16:28:45.235934 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.236428 kubelet[3202]: E0625 16:28:45.236407 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.236428 kubelet[3202]: W0625 16:28:45.236424 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.236752 kubelet[3202]: E0625 16:28:45.236461 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.236827 kubelet[3202]: E0625 16:28:45.236759 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.236827 kubelet[3202]: W0625 16:28:45.236780 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.237032 kubelet[3202]: E0625 16:28:45.236900 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.237032 kubelet[3202]: E0625 16:28:45.237016 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.237032 kubelet[3202]: W0625 16:28:45.237025 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.237504 kubelet[3202]: E0625 16:28:45.237136 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.237667 kubelet[3202]: E0625 16:28:45.237602 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.238005 kubelet[3202]: W0625 16:28:45.237668 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.238005 kubelet[3202]: E0625 16:28:45.237831 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.238005 kubelet[3202]: E0625 16:28:45.237998 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.238239 kubelet[3202]: W0625 16:28:45.238009 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.238239 kubelet[3202]: E0625 16:28:45.238031 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.238536 kubelet[3202]: E0625 16:28:45.238485 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.238536 kubelet[3202]: W0625 16:28:45.238497 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.238536 kubelet[3202]: E0625 16:28:45.238519 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.238930 kubelet[3202]: E0625 16:28:45.238752 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.238930 kubelet[3202]: W0625 16:28:45.238762 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.238930 kubelet[3202]: E0625 16:28:45.238778 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.239277 kubelet[3202]: E0625 16:28:45.239155 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.239277 kubelet[3202]: W0625 16:28:45.239166 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.239277 kubelet[3202]: E0625 16:28:45.239267 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.239861 kubelet[3202]: E0625 16:28:45.239842 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.240037 kubelet[3202]: W0625 16:28:45.239867 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.240338 kubelet[3202]: E0625 16:28:45.240081 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.240338 kubelet[3202]: E0625 16:28:45.240253 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.240338 kubelet[3202]: W0625 16:28:45.240266 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.240338 kubelet[3202]: E0625 16:28:45.240296 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.240735 kubelet[3202]: E0625 16:28:45.240483 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.240735 kubelet[3202]: W0625 16:28:45.240493 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.240735 kubelet[3202]: E0625 16:28:45.240523 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.240983 kubelet[3202]: E0625 16:28:45.240740 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.240983 kubelet[3202]: W0625 16:28:45.240752 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.240983 kubelet[3202]: E0625 16:28:45.240768 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.241123 kubelet[3202]: E0625 16:28:45.241012 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.241123 kubelet[3202]: W0625 16:28:45.241022 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.241123 kubelet[3202]: E0625 16:28:45.241037 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:45.244577 kubelet[3202]: E0625 16:28:45.244555 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:45.244577 kubelet[3202]: W0625 16:28:45.244572 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:45.244701 kubelet[3202]: E0625 16:28:45.244591 3202 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:45.895545 containerd[1802]: time="2024-06-25T16:28:45.895460996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:45.897896 containerd[1802]: time="2024-06-25T16:28:45.897828504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:28:45.899883 containerd[1802]: time="2024-06-25T16:28:45.899848908Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:45.906317 containerd[1802]: time="2024-06-25T16:28:45.906146538Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:45.910993 containerd[1802]: time="2024-06-25T16:28:45.910950449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:45.912710 containerd[1802]: time="2024-06-25T16:28:45.912664197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.849248002s" Jun 25 16:28:45.913047 containerd[1802]: time="2024-06-25T16:28:45.912847969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:28:45.918854 containerd[1802]: time="2024-06-25T16:28:45.918801903Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:28:45.946949 kubelet[3202]: E0625 16:28:45.946588 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:45.951497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881430145.mount: Deactivated successfully. Jun 25 16:28:45.972425 containerd[1802]: time="2024-06-25T16:28:45.972239968Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874\"" Jun 25 16:28:45.974548 containerd[1802]: time="2024-06-25T16:28:45.974502075Z" level=info msg="StartContainer for \"b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874\"" Jun 25 16:28:46.062232 systemd[1]: Started cri-containerd-b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874.scope - libcontainer container b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874. 
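The repeated kubelet messages above come from the FlexVolume probe path: the kubelet execs the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ (here nodeagent~uds/uds, which does not exist yet) with the argument "init" and JSON-decodes whatever lands on stdout. With no executable there is no output, and decoding an empty byte slice in Go yields exactly "unexpected end of JSON input". A minimal sketch of that failure mode follows; it is not the kubelet's actual driver-call.go, and the response type and field names are illustrative assumptions only:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus roughly mirrors the shape of a FlexVolume reply; the exact
// fields here are an assumption for illustration.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		// With the binary missing, out is empty and jsonErr reads
		// "unexpected end of JSON input", matching the log entries above.
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, jsonErr, execErr)
	}
	return nil
}

func main() {
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

As the log says, the kubelet just skips the plugin and keeps re-probing; the pod2daemon-flexvol image being pulled in the surrounding entries is the Calico component that normally provides this uds driver.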
Jun 25 16:28:46.105400 kubelet[3202]: I0625 16:28:46.105365 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:28:46.132000 audit: BPF prog-id=126 op=LOAD Jun 25 16:28:46.132000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3629 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373330623563623236393466383765373163623739383434363462 Jun 25 16:28:46.133000 audit: BPF prog-id=127 op=LOAD Jun 25 16:28:46.133000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3629 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373330623563623236393466383765373163623739383434363462 Jun 25 16:28:46.133000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:28:46.133000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:28:46.133000 audit: BPF prog-id=128 op=LOAD Jun 25 16:28:46.133000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3629 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373330623563623236393466383765373163623739383434363462 Jun 25 16:28:46.167029 containerd[1802]: time="2024-06-25T16:28:46.166878983Z" level=info msg="StartContainer for \"b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874\" returns successfully" Jun 25 16:28:46.183278 systemd[1]: cri-containerd-b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874.scope: Deactivated successfully. Jun 25 16:28:46.187000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:28:46.222639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874-rootfs.mount: Deactivated successfully. 
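The audit PROCTITLE fields in these records are the process command line, hex-encoded with NUL bytes between arguments; the long 72756E63002D2D726F6F74... strings above all decode to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... invocations (the trailing container IDs are truncated in the log and stay truncated when decoded). A small decoding sketch, using a hypothetical helper rather than any tool referenced in the log:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string (argv joined by NUL
// bytes) back into a readable command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Leading portion of one of the values above; the logged strings are
	// longer and end in a truncated container ID.
	s, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // runc --root /run/containerd/runc/k8s.io
}
```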
Jun 25 16:28:46.363460 containerd[1802]: time="2024-06-25T16:28:46.309536770Z" level=info msg="shim disconnected" id=b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874 namespace=k8s.io Jun 25 16:28:46.363734 containerd[1802]: time="2024-06-25T16:28:46.363460932Z" level=warning msg="cleaning up after shim disconnected" id=b6730b5cb2694f87e71cb7984464b5a4f727a2b5171fd05377f3cca01bb8e874 namespace=k8s.io Jun 25 16:28:46.363734 containerd[1802]: time="2024-06-25T16:28:46.363482743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:47.112276 containerd[1802]: time="2024-06-25T16:28:47.111619993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:28:47.136712 kubelet[3202]: I0625 16:28:47.136600 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-866b59c96-4ft9s" podStartSLOduration=3.9320274079999997 podStartE2EDuration="7.136551161s" podCreationTimestamp="2024-06-25 16:28:40 +0000 UTC" firstStartedPulling="2024-06-25 16:28:40.845429845 +0000 UTC m=+20.137662113" lastFinishedPulling="2024-06-25 16:28:44.049953609 +0000 UTC m=+23.342185866" observedRunningTime="2024-06-25 16:28:45.086654097 +0000 UTC m=+24.378886374" watchObservedRunningTime="2024-06-25 16:28:47.136551161 +0000 UTC m=+26.428783438" Jun 25 16:28:47.946687 kubelet[3202]: E0625 16:28:47.946634 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:49.947081 kubelet[3202]: E0625 16:28:49.947040 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:51.946882 kubelet[3202]: E0625 16:28:51.946843 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:52.372242 containerd[1802]: time="2024-06-25T16:28:52.372175882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:52.374139 containerd[1802]: time="2024-06-25T16:28:52.374079561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:28:52.376290 containerd[1802]: time="2024-06-25T16:28:52.376247937Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:52.379311 containerd[1802]: time="2024-06-25T16:28:52.379263124Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:52.386333 containerd[1802]: time="2024-06-25T16:28:52.386286120Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:52.389160 containerd[1802]: time="2024-06-25T16:28:52.389071431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.277361325s" Jun 25 16:28:52.389410 containerd[1802]: time="2024-06-25T16:28:52.389167418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:28:52.397142 containerd[1802]: time="2024-06-25T16:28:52.397094848Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:28:52.419246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491987433.mount: Deactivated successfully. Jun 25 16:28:52.430691 containerd[1802]: time="2024-06-25T16:28:52.430636276Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2\"" Jun 25 16:28:52.432925 containerd[1802]: time="2024-06-25T16:28:52.431626751Z" level=info msg="StartContainer for \"29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2\"" Jun 25 16:28:52.519353 systemd[1]: run-containerd-runc-k8s.io-29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2-runc.Bn5Izq.mount: Deactivated successfully. Jun 25 16:28:52.528440 systemd[1]: Started cri-containerd-29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2.scope - libcontainer container 29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2. 
Jun 25 16:28:52.549244 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:28:52.549381 kernel: audit: type=1334 audit(1719332932.545:496): prog-id=129 op=LOAD Jun 25 16:28:52.549428 kernel: audit: type=1300 audit(1719332932.545:496): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.545000 audit: BPF prog-id=129 op=LOAD Jun 25 16:28:52.545000 audit[3863]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663231663561633336383664333134636132666534313766323539 Jun 25 16:28:52.552557 kernel: audit: type=1327 audit(1719332932.545:496): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663231663561633336383664333134636132666534313766323539 Jun 25 16:28:52.545000 audit: BPF prog-id=130 op=LOAD Jun 25 16:28:52.555261 kernel: audit: type=1334 audit(1719332932.545:497): prog-id=130 op=LOAD Jun 25 16:28:52.545000 audit[3863]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663231663561633336383664333134636132666534313766323539 Jun 25 16:28:52.565417 kernel: audit: type=1300 audit(1719332932.545:497): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.565533 kernel: audit: type=1327 audit(1719332932.545:497): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663231663561633336383664333134636132666534313766323539 Jun 25 16:28:52.545000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:28:52.566508 kernel: audit: type=1334 audit(1719332932.545:498): prog-id=130 op=UNLOAD Jun 25 16:28:52.545000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:28:52.567364 kernel: audit: type=1334 audit(1719332932.545:499): prog-id=129 op=UNLOAD Jun 25 16:28:52.545000 audit: BPF prog-id=131 op=LOAD Jun 25 16:28:52.574114 kernel: audit: type=1334 audit(1719332932.545:500): prog-id=131 op=LOAD Jun 25 16:28:52.574210 kernel: audit: type=1300 audit(1719332932.545:500): 
arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.545000 audit[3863]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3629 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663231663561633336383664333134636132666534313766323539 Jun 25 16:28:52.587977 containerd[1802]: time="2024-06-25T16:28:52.587919631Z" level=info msg="StartContainer for \"29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2\" returns successfully" Jun 25 16:28:53.698230 systemd[1]: cri-containerd-29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2.scope: Deactivated successfully. Jun 25 16:28:53.702000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:28:53.733962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2-rootfs.mount: Deactivated successfully. Jun 25 16:28:53.763440 kubelet[3202]: I0625 16:28:53.763276 3202 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:28:53.798303 kubelet[3202]: I0625 16:28:53.798261 3202 topology_manager.go:215] "Topology Admit Handler" podUID="12cf6b5e-7d0a-4601-b457-20c98f952e2c" podNamespace="kube-system" podName="coredns-76f75df574-xjl2t" Jun 25 16:28:53.811862 systemd[1]: Created slice kubepods-burstable-pod12cf6b5e_7d0a_4601_b457_20c98f952e2c.slice - libcontainer container kubepods-burstable-pod12cf6b5e_7d0a_4601_b457_20c98f952e2c.slice. Jun 25 16:28:53.817870 kubelet[3202]: I0625 16:28:53.817835 3202 topology_manager.go:215] "Topology Admit Handler" podUID="1f02df65-1d0e-4c87-89a7-023877ca1122" podNamespace="calico-system" podName="calico-kube-controllers-796c95c4bb-vmlqt" Jun 25 16:28:53.818329 kubelet[3202]: I0625 16:28:53.818306 3202 topology_manager.go:215] "Topology Admit Handler" podUID="7ff87323-7adf-489a-b448-aa87f84c2db0" podNamespace="kube-system" podName="coredns-76f75df574-552qj" Jun 25 16:28:53.830105 systemd[1]: Created slice kubepods-burstable-pod7ff87323_7adf_489a_b448_aa87f84c2db0.slice - libcontainer container kubepods-burstable-pod7ff87323_7adf_489a_b448_aa87f84c2db0.slice. Jun 25 16:28:53.846054 systemd[1]: Created slice kubepods-besteffort-pod1f02df65_1d0e_4c87_89a7_023877ca1122.slice - libcontainer container kubepods-besteffort-pod1f02df65_1d0e_4c87_89a7_023877ca1122.slice. 
Jun 25 16:28:53.904319 kubelet[3202]: I0625 16:28:53.904273 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f02df65-1d0e-4c87-89a7-023877ca1122-tigera-ca-bundle\") pod \"calico-kube-controllers-796c95c4bb-vmlqt\" (UID: \"1f02df65-1d0e-4c87-89a7-023877ca1122\") " pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" Jun 25 16:28:53.908685 kubelet[3202]: I0625 16:28:53.904575 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j68rl\" (UniqueName: \"kubernetes.io/projected/7ff87323-7adf-489a-b448-aa87f84c2db0-kube-api-access-j68rl\") pod \"coredns-76f75df574-552qj\" (UID: \"7ff87323-7adf-489a-b448-aa87f84c2db0\") " pod="kube-system/coredns-76f75df574-552qj" Jun 25 16:28:53.908685 kubelet[3202]: I0625 16:28:53.904640 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhxw\" (UniqueName: \"kubernetes.io/projected/1f02df65-1d0e-4c87-89a7-023877ca1122-kube-api-access-xbhxw\") pod \"calico-kube-controllers-796c95c4bb-vmlqt\" (UID: \"1f02df65-1d0e-4c87-89a7-023877ca1122\") " pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" Jun 25 16:28:53.908685 kubelet[3202]: I0625 16:28:53.904672 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshmh\" (UniqueName: \"kubernetes.io/projected/12cf6b5e-7d0a-4601-b457-20c98f952e2c-kube-api-access-wshmh\") pod \"coredns-76f75df574-xjl2t\" (UID: \"12cf6b5e-7d0a-4601-b457-20c98f952e2c\") " pod="kube-system/coredns-76f75df574-xjl2t" Jun 25 16:28:53.908685 kubelet[3202]: I0625 16:28:53.904697 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ff87323-7adf-489a-b448-aa87f84c2db0-config-volume\") pod \"coredns-76f75df574-552qj\" (UID: \"7ff87323-7adf-489a-b448-aa87f84c2db0\") " pod="kube-system/coredns-76f75df574-552qj" Jun 25 16:28:53.908685 kubelet[3202]: I0625 16:28:53.904732 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12cf6b5e-7d0a-4601-b457-20c98f952e2c-config-volume\") pod \"coredns-76f75df574-xjl2t\" (UID: \"12cf6b5e-7d0a-4601-b457-20c98f952e2c\") " pod="kube-system/coredns-76f75df574-xjl2t" Jun 25 16:28:53.914168 containerd[1802]: time="2024-06-25T16:28:53.914102166Z" level=info msg="shim disconnected" id=29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2 namespace=k8s.io Jun 25 16:28:53.914168 containerd[1802]: time="2024-06-25T16:28:53.914163435Z" level=warning msg="cleaning up after shim disconnected" id=29f21f5ac3686d314ca2fe417f25985374da15ae0288cedc6a9af54356b21af2 namespace=k8s.io Jun 25 16:28:53.914168 containerd[1802]: time="2024-06-25T16:28:53.914175412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:53.930731 containerd[1802]: time="2024-06-25T16:28:53.930672769Z" level=warning msg="cleanup warnings time=\"2024-06-25T16:28:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 16:28:53.954205 systemd[1]: Created slice kubepods-besteffort-podcb1b6fee_76dd_4b53_8bcb_f17d750a370e.slice - libcontainer container 
kubepods-besteffort-podcb1b6fee_76dd_4b53_8bcb_f17d750a370e.slice. Jun 25 16:28:53.960917 containerd[1802]: time="2024-06-25T16:28:53.960873964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7s8f,Uid:cb1b6fee-76dd-4b53-8bcb-f17d750a370e,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:54.124921 containerd[1802]: time="2024-06-25T16:28:54.124870289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xjl2t,Uid:12cf6b5e-7d0a-4601-b457-20c98f952e2c,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:54.144508 containerd[1802]: time="2024-06-25T16:28:54.144462892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-552qj,Uid:7ff87323-7adf-489a-b448-aa87f84c2db0,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:54.149813 containerd[1802]: time="2024-06-25T16:28:54.149767772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c95c4bb-vmlqt,Uid:1f02df65-1d0e-4c87-89a7-023877ca1122,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:54.177699 containerd[1802]: time="2024-06-25T16:28:54.177622929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:28:54.401887 containerd[1802]: time="2024-06-25T16:28:54.401801044Z" level=error msg="Failed to destroy network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.402693 containerd[1802]: time="2024-06-25T16:28:54.402545480Z" level=error msg="encountered an error cleaning up failed sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.402932 containerd[1802]: time="2024-06-25T16:28:54.402879300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xjl2t,Uid:12cf6b5e-7d0a-4601-b457-20c98f952e2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.403749 kubelet[3202]: E0625 16:28:54.403351 3202 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.403749 kubelet[3202]: E0625 16:28:54.403429 3202 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xjl2t" Jun 25 16:28:54.403749 kubelet[3202]: E0625 
16:28:54.403459 3202 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xjl2t" Jun 25 16:28:54.403966 kubelet[3202]: E0625 16:28:54.403542 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xjl2t_kube-system(12cf6b5e-7d0a-4601-b457-20c98f952e2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xjl2t_kube-system(12cf6b5e-7d0a-4601-b457-20c98f952e2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xjl2t" podUID="12cf6b5e-7d0a-4601-b457-20c98f952e2c" Jun 25 16:28:54.408893 containerd[1802]: time="2024-06-25T16:28:54.408804816Z" level=error msg="Failed to destroy network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.409926 containerd[1802]: time="2024-06-25T16:28:54.409857944Z" level=error msg="encountered an error cleaning up failed sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.410169 containerd[1802]: time="2024-06-25T16:28:54.410117777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7s8f,Uid:cb1b6fee-76dd-4b53-8bcb-f17d750a370e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.410623 kubelet[3202]: E0625 16:28:54.410540 3202 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.410831 kubelet[3202]: E0625 16:28:54.410772 3202 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:54.410831 kubelet[3202]: E0625 16:28:54.410810 3202 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f7s8f" Jun 25 16:28:54.410946 kubelet[3202]: E0625 16:28:54.410890 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f7s8f_calico-system(cb1b6fee-76dd-4b53-8bcb-f17d750a370e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f7s8f_calico-system(cb1b6fee-76dd-4b53-8bcb-f17d750a370e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:54.448143 containerd[1802]: time="2024-06-25T16:28:54.448072591Z" level=error msg="Failed to destroy network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.448634 containerd[1802]: time="2024-06-25T16:28:54.448582090Z" level=error msg="encountered an error cleaning up failed sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.448818 containerd[1802]: time="2024-06-25T16:28:54.448666849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c95c4bb-vmlqt,Uid:1f02df65-1d0e-4c87-89a7-023877ca1122,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.449132 kubelet[3202]: E0625 16:28:54.449098 3202 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.449232 kubelet[3202]: E0625 16:28:54.449176 3202 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" Jun 25 16:28:54.449297 kubelet[3202]: E0625 16:28:54.449240 3202 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" Jun 25 16:28:54.451601 kubelet[3202]: E0625 16:28:54.451573 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-796c95c4bb-vmlqt_calico-system(1f02df65-1d0e-4c87-89a7-023877ca1122)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-796c95c4bb-vmlqt_calico-system(1f02df65-1d0e-4c87-89a7-023877ca1122)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" podUID="1f02df65-1d0e-4c87-89a7-023877ca1122" Jun 25 16:28:54.458816 containerd[1802]: time="2024-06-25T16:28:54.458760521Z" level=error msg="Failed to destroy network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.459309 containerd[1802]: time="2024-06-25T16:28:54.459261751Z" level=error msg="encountered an error cleaning up failed sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.459413 containerd[1802]: time="2024-06-25T16:28:54.459329440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-552qj,Uid:7ff87323-7adf-489a-b448-aa87f84c2db0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.459733 kubelet[3202]: E0625 16:28:54.459706 3202 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:54.459895 kubelet[3202]: E0625 16:28:54.459776 3202 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-552qj" Jun 25 16:28:54.459895 kubelet[3202]: E0625 16:28:54.459808 3202 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-552qj" Jun 25 16:28:54.459895 kubelet[3202]: E0625 16:28:54.459882 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-552qj_kube-system(7ff87323-7adf-489a-b448-aa87f84c2db0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-552qj_kube-system(7ff87323-7adf-489a-b448-aa87f84c2db0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-552qj" podUID="7ff87323-7adf-489a-b448-aa87f84c2db0" Jun 25 16:28:54.741660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a-shm.mount: Deactivated successfully. Jun 25 16:28:55.152867 kubelet[3202]: I0625 16:28:55.152826 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:28:55.157869 kubelet[3202]: I0625 16:28:55.156304 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:28:55.158257 containerd[1802]: time="2024-06-25T16:28:55.153826025Z" level=info msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" Jun 25 16:28:55.170687 kubelet[3202]: I0625 16:28:55.168369 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:28:55.171865 containerd[1802]: time="2024-06-25T16:28:55.158013359Z" level=info msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" Jun 25 16:28:55.174789 containerd[1802]: time="2024-06-25T16:28:55.174347740Z" level=info msg="Ensure that sandbox e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2 in task-service has been cleanup successfully" Jun 25 16:28:55.175200 containerd[1802]: time="2024-06-25T16:28:55.171711132Z" level=info msg="Ensure that sandbox 61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a in task-service has been cleanup successfully" Jun 25 16:28:55.175824 containerd[1802]: time="2024-06-25T16:28:55.175780783Z" level=info msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" Jun 25 16:28:55.176044 containerd[1802]: time="2024-06-25T16:28:55.176018950Z" level=info msg="Ensure that sandbox 
0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7 in task-service has been cleanup successfully" Jun 25 16:28:55.184296 kubelet[3202]: I0625 16:28:55.181680 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:28:55.193817 containerd[1802]: time="2024-06-25T16:28:55.193734443Z" level=info msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" Jun 25 16:28:55.194808 containerd[1802]: time="2024-06-25T16:28:55.194778580Z" level=info msg="Ensure that sandbox 6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc in task-service has been cleanup successfully" Jun 25 16:28:55.337553 containerd[1802]: time="2024-06-25T16:28:55.337467771Z" level=error msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" failed" error="failed to destroy network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:55.337821 kubelet[3202]: E0625 16:28:55.337797 3202 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:28:55.337935 kubelet[3202]: E0625 16:28:55.337889 3202 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a"} Jun 25 16:28:55.338009 kubelet[3202]: E0625 16:28:55.337992 3202 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:55.338126 kubelet[3202]: E0625 16:28:55.338067 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb1b6fee-76dd-4b53-8bcb-f17d750a370e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f7s8f" podUID="cb1b6fee-76dd-4b53-8bcb-f17d750a370e" Jun 25 16:28:55.339802 containerd[1802]: time="2024-06-25T16:28:55.339745422Z" level=error msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" failed" error="failed to destroy network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:55.340023 kubelet[3202]: E0625 16:28:55.339988 3202 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:28:55.340358 kubelet[3202]: E0625 16:28:55.340033 3202 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc"} Jun 25 16:28:55.340358 kubelet[3202]: E0625 16:28:55.340273 3202 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12cf6b5e-7d0a-4601-b457-20c98f952e2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:55.340358 kubelet[3202]: E0625 16:28:55.340323 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12cf6b5e-7d0a-4601-b457-20c98f952e2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xjl2t" podUID="12cf6b5e-7d0a-4601-b457-20c98f952e2c" Jun 25 16:28:55.341961 containerd[1802]: time="2024-06-25T16:28:55.341900784Z" level=error msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" failed" error="failed to destroy network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:55.342397 kubelet[3202]: E0625 16:28:55.342129 3202 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:28:55.342397 kubelet[3202]: E0625 16:28:55.342161 3202 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2"} Jun 25 16:28:55.342397 kubelet[3202]: E0625 16:28:55.342311 3202 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"1f02df65-1d0e-4c87-89a7-023877ca1122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:55.342397 kubelet[3202]: E0625 16:28:55.342354 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f02df65-1d0e-4c87-89a7-023877ca1122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" podUID="1f02df65-1d0e-4c87-89a7-023877ca1122" Jun 25 16:28:55.342942 containerd[1802]: time="2024-06-25T16:28:55.342895514Z" level=error msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" failed" error="failed to destroy network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:55.343298 kubelet[3202]: E0625 16:28:55.343279 3202 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:28:55.343391 kubelet[3202]: E0625 16:28:55.343313 3202 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7"} Jun 25 16:28:55.343391 kubelet[3202]: E0625 16:28:55.343356 3202 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ff87323-7adf-489a-b448-aa87f84c2db0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:55.343519 kubelet[3202]: E0625 16:28:55.343399 3202 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ff87323-7adf-489a-b448-aa87f84c2db0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-552qj" podUID="7ff87323-7adf-489a-b448-aa87f84c2db0" Jun 25 16:28:58.147230 
kubelet[3202]: I0625 16:28:58.147029 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:28:58.257000 audit[4134]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:58.261011 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:28:58.261142 kernel: audit: type=1325 audit(1719332938.257:502): table=filter:95 family=2 entries=15 op=nft_register_rule pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:58.261326 kernel: audit: type=1300 audit(1719332938.257:502): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd80abab50 a2=0 a3=7ffd80abab3c items=0 ppid=3352 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.257000 audit[4134]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd80abab50 a2=0 a3=7ffd80abab3c items=0 ppid=3352 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:58.270358 kernel: audit: type=1327 audit(1719332938.257:502): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:58.266000 audit[4134]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:58.266000 audit[4134]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd80abab50 a2=0 a3=7ffd80abab3c items=0 ppid=3352 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.278934 kernel: audit: type=1325 audit(1719332938.266:503): table=nat:96 family=2 entries=19 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:58.279043 kernel: audit: type=1300 audit(1719332938.266:503): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd80abab50 a2=0 a3=7ffd80abab3c items=0 ppid=3352 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:58.285210 kernel: audit: type=1327 audit(1719332938.266:503): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:02.952619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130013006.mount: Deactivated successfully. 
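Note: every RunPodSandbox and StopPodSandbox failure in the entries above reports the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename and aborts because the file does not exist yet; it only appears once the calico-node container (whose image is still being pulled at this point) starts and mounts /var/lib/calico/. A minimal Go sketch of that precondition check follows, for illustration only and not Calico's actual implementation; the path and error wording are taken from the log.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the Calico CNI plugin is complaining about in the
// entries above; calico-node writes it after it starts.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// This is the state the kubelet keeps hitting above: every CNI add/delete
		// fails until calico-node is running and has populated /var/lib/calico/.
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println("read error:", err)
		os.Exit(1)
	}
	fmt.Printf("CNI precondition satisfied; node name is %q\n", string(name))
}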
Jun 25 16:29:03.031203 containerd[1802]: time="2024-06-25T16:29:03.031042842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:03.033703 containerd[1802]: time="2024-06-25T16:29:03.033629505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:29:03.036838 containerd[1802]: time="2024-06-25T16:29:03.036765605Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:03.047142 containerd[1802]: time="2024-06-25T16:29:03.047094198Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:03.062359 containerd[1802]: time="2024-06-25T16:29:03.062300097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:03.069556 containerd[1802]: time="2024-06-25T16:29:03.069503535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.891609743s" Jun 25 16:29:03.069896 containerd[1802]: time="2024-06-25T16:29:03.069866752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:29:03.114855 containerd[1802]: time="2024-06-25T16:29:03.102666339Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:29:03.153397 containerd[1802]: time="2024-06-25T16:29:03.153347480Z" level=info msg="CreateContainer within sandbox \"1808e6a2f2dc204445dd3b6d15021fcd8c5a35400e5f239467bf93143bf9dbaf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7\"" Jun 25 16:29:03.155616 containerd[1802]: time="2024-06-25T16:29:03.154144241Z" level=info msg="StartContainer for \"4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7\"" Jun 25 16:29:03.218448 systemd[1]: Started cri-containerd-4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7.scope - libcontainer container 4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7. 
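Note: the containerd entries above trace the standard pull-create-start sequence for the calico-node image: PullImage resolves ghcr.io/flatcar/calico/node:v3.28.0 (about 8.9 s), CreateContainer creates the container inside the existing pod sandbox, and StartContainer launches the task that systemd then tracks as a cri-containerd scope. The kubelet drives this through the CRI API, but the same sequence can be sketched against containerd's Go client; the socket path, image reference, and container ID below are assumptions chosen to mirror the log, not what the kubelet literally executes.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the node's CRI runtime listens on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin operates in the "k8s.io" namespace, as the runc proctitles below show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Roughly mirrors the PullImage / ImageCreate events in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer / StartContainer correspond to NewContainer + NewTask + Start.
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithNewSnapshot("calico-node-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("task started")
}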
Jun 25 16:29:03.245000 audit: BPF prog-id=132 op=LOAD Jun 25 16:29:03.249461 kernel: audit: type=1334 audit(1719332943.245:504): prog-id=132 op=LOAD Jun 25 16:29:03.249551 kernel: audit: type=1300 audit(1719332943.245:504): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3629 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:03.245000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3629 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:03.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465313962653438646161363433623132383135386333353235643135 Jun 25 16:29:03.253433 kernel: audit: type=1327 audit(1719332943.245:504): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465313962653438646161363433623132383135386333353235643135 Jun 25 16:29:03.253547 kernel: audit: type=1334 audit(1719332943.249:505): prog-id=133 op=LOAD Jun 25 16:29:03.249000 audit: BPF prog-id=133 op=LOAD Jun 25 16:29:03.249000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3629 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:03.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465313962653438646161363433623132383135386333353235643135 Jun 25 16:29:03.249000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:29:03.249000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:29:03.249000 audit: BPF prog-id=134 op=LOAD Jun 25 16:29:03.249000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3629 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:03.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465313962653438646161363433623132383135386333353235643135 Jun 25 16:29:03.281681 containerd[1802]: time="2024-06-25T16:29:03.281629611Z" level=info msg="StartContainer for \"4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7\" returns successfully" Jun 25 16:29:03.404227 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:29:03.404356 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
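Note: the audit SYSCALL/PROCTITLE pairs above record runc loading BPF programs while starting the calico-node task. The PROCTITLE field is the process command line, hex-encoded with NUL separators between arguments; decoding the value above yields "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/4e19be48daa643b1..." (auditd truncates long titles). A short Go sketch of that decoding, using a shortened copy of the hex string from the record above:

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Shortened PROCTITLE value copied from one of the audit records above;
	// auditd hex-encodes the command line and separates arguments with NUL bytes.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // prints: runc --root /run/containerd/runc/k8s.io --log
}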
Jun 25 16:29:04.296158 kubelet[3202]: I0625 16:29:04.296123 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mt2wn" podStartSLOduration=2.188127346 podStartE2EDuration="24.286379559s" podCreationTimestamp="2024-06-25 16:28:40 +0000 UTC" firstStartedPulling="2024-06-25 16:28:40.972118441 +0000 UTC m=+20.264350698" lastFinishedPulling="2024-06-25 16:29:03.070370653 +0000 UTC m=+42.362602911" observedRunningTime="2024-06-25 16:29:04.283457831 +0000 UTC m=+43.575690105" watchObservedRunningTime="2024-06-25 16:29:04.286379559 +0000 UTC m=+43.578611835" Jun 25 16:29:04.419126 systemd[1]: run-containerd-runc-k8s.io-4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7-runc.Tr1kEp.mount: Deactivated successfully. Jun 25 16:29:04.892758 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:29:04.893003 kernel: audit: type=1400 audit(1719332944.889:509): avc: denied { write } for pid=4263 comm="tee" name="fd" dev="proc" ino=27013 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.889000 audit[4263]: AVC avc: denied { write } for pid=4263 comm="tee" name="fd" dev="proc" ino=27013 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.889000 audit[4263]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc72fca1a a2=241 a3=1b6 items=1 ppid=4231 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.903902 kernel: audit: type=1300 audit(1719332944.889:509): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc72fca1a a2=241 a3=1b6 items=1 ppid=4231 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.904013 kernel: audit: type=1307 audit(1719332944.889:509): cwd="/etc/service/enabled/cni/log" Jun 25 16:29:04.889000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:29:04.889000 audit: PATH item=0 name="/dev/fd/63" inode=26027 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.907231 kernel: audit: type=1302 audit(1719332944.889:509): item=0 name="/dev/fd/63" inode=26027 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.912044 kernel: audit: type=1327 audit(1719332944.889:509): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.921000 audit[4267]: AVC avc: denied { write } for pid=4267 comm="tee" name="fd" dev="proc" ino=27027 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.924214 kernel: audit: type=1400 audit(1719332944.921:510): avc: denied { write } for pid=4267 comm="tee" name="fd" dev="proc" ino=27027 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.922000 audit[4281]: AVC avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=26046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.930376 kernel: audit: type=1400 audit(1719332944.922:511): avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=26046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.930511 kernel: audit: type=1300 audit(1719332944.922:511): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4e07aa09 a2=241 a3=1b6 items=1 ppid=4241 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.922000 audit[4281]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4e07aa09 a2=241 a3=1b6 items=1 ppid=4241 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.922000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:29:04.934777 kernel: audit: type=1307 audit(1719332944.922:511): cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:29:04.934887 kernel: audit: type=1302 audit(1719332944.922:511): item=0 name="/dev/fd/63" inode=27016 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.922000 audit: PATH item=0 name="/dev/fd/63" inode=27016 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.922000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.921000 audit[4267]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1e808a18 a2=241 a3=1b6 items=1 ppid=4234 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.921000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:29:04.921000 audit: PATH item=0 name="/dev/fd/63" inode=27000 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.965000 audit[4288]: AVC avc: denied { write } for pid=4288 comm="tee" name="fd" dev="proc" ino=26053 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.965000 audit[4288]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbae3da18 a2=241 a3=1b6 items=1 ppid=4233 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.965000 audit: CWD 
cwd="/etc/service/enabled/confd/log" Jun 25 16:29:04.965000 audit: PATH item=0 name="/dev/fd/63" inode=27021 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.970000 audit[4290]: AVC avc: denied { write } for pid=4290 comm="tee" name="fd" dev="proc" ino=27033 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.970000 audit[4290]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc29a10a19 a2=241 a3=1b6 items=1 ppid=4238 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.970000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:29:04.970000 audit: PATH item=0 name="/dev/fd/63" inode=27024 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:04.974000 audit[4293]: AVC avc: denied { write } for pid=4293 comm="tee" name="fd" dev="proc" ino=26057 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:04.974000 audit[4293]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd69dcaa18 a2=241 a3=1b6 items=1 ppid=4245 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.974000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:29:04.974000 audit: PATH item=0 name="/dev/fd/63" inode=26048 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:04.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:05.009000 audit[4305]: AVC avc: denied { write } for pid=4305 comm="tee" name="fd" dev="proc" ino=27044 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:05.009000 audit[4305]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb8b41a08 a2=241 a3=1b6 items=1 ppid=4258 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.009000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:29:05.009000 audit: PATH item=0 name="/dev/fd/63" inode=27037 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:05.009000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:05.283941 systemd[1]: run-containerd-runc-k8s.io-4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7-runc.gDt33O.mount: Deactivated successfully. Jun 25 16:29:05.430812 systemd[1]: Started sshd@7-172.31.18.172:22-139.178.89.65:35856.service - OpenSSH per-connection server daemon (139.178.89.65:35856). Jun 25 16:29:05.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.172:22-139.178.89.65:35856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:05.653000 audit[4354]: USER_ACCT pid=4354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:05.654671 sshd[4354]: Accepted publickey for core from 139.178.89.65 port 35856 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:05.654000 audit[4354]: CRED_ACQ pid=4354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:05.654000 audit[4354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc0a7dd0 a2=3 a3=7f255fe37480 items=0 ppid=1 pid=4354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.654000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:05.657418 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:05.665021 systemd-logind[1790]: New session 8 of user core. Jun 25 16:29:05.667482 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:29:05.673000 audit[4354]: USER_START pid=4354 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:05.675000 audit[4368]: CRED_ACQ pid=4368 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:05.802221 (udev-worker)[4179]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:29:05.822883 systemd-networkd[1531]: vxlan.calico: Link UP Jun 25 16:29:05.822893 systemd-networkd[1531]: vxlan.calico: Gained carrier Jun 25 16:29:05.976615 (udev-worker)[4404]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:29:05.981379 (udev-worker)[4409]: Network interface NamePolicy= disabled on kernel command line. 
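Note: with calico-node running, systemd-networkd reports the vxlan.calico overlay device coming up ("Link UP" and "Gained carrier" above, "Gained IPv6LL" shortly after), after which the kubelet retries the failed sandboxes (the Attempt:1 entries below). A stdlib-only Go sketch for checking that interface state from the host; the device name is taken from the log.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Device name from the systemd-networkd entries above.
	iface, err := net.InterfaceByName("vxlan.calico")
	if err != nil {
		log.Fatalf("vxlan.calico not present yet (calico-node may still be starting): %v", err)
	}
	fmt.Printf("%s: index=%d mtu=%d up=%v\n", iface.Name, iface.Index, iface.MTU, iface.Flags&net.FlagUp != 0)

	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		// The IPv6 link-local address shows up once networkd logs "Gained IPv6LL".
		fmt.Println("  addr:", a)
	}
}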
Jun 25 16:29:06.006000 audit: BPF prog-id=135 op=LOAD Jun 25 16:29:06.006000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda3227ed0 a2=70 a3=7f21728c5000 items=0 ppid=4240 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.006000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:06.006000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:29:06.006000 audit: BPF prog-id=136 op=LOAD Jun 25 16:29:06.006000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda3227ed0 a2=70 a3=6f items=0 ppid=4240 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.006000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:06.006000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:29:06.006000 audit: BPF prog-id=137 op=LOAD Jun 25 16:29:06.006000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda3227e60 a2=70 a3=7ffda3227ed0 items=0 ppid=4240 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.006000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:06.006000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:29:06.010000 audit: BPF prog-id=138 op=LOAD Jun 25 16:29:06.010000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda3227e90 a2=70 a3=0 items=0 ppid=4240 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.010000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:06.029000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:29:06.119607 sshd[4354]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:06.121000 audit[4354]: USER_END pid=4354 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:06.122000 audit[4354]: CRED_DISP pid=4354 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:06.124567 systemd[1]: 
sshd@7-172.31.18.172:22-139.178.89.65:35856.service: Deactivated successfully. Jun 25 16:29:06.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.172:22-139.178.89.65:35856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:06.125685 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:29:06.126603 systemd-logind[1790]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:29:06.128133 systemd-logind[1790]: Removed session 8. Jun 25 16:29:06.148000 audit[4441]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:06.148000 audit[4441]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff4f4aec60 a2=0 a3=7fff4f4aec4c items=0 ppid=4240 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.148000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:06.154000 audit[4439]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:06.154000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffed763c3b0 a2=0 a3=7ffed763c39c items=0 ppid=4240 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.154000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:06.156000 audit[4438]: NETFILTER_CFG table=raw:99 family=2 entries=19 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:06.156000 audit[4438]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffe08840ad0 a2=0 a3=7ffe08840abc items=0 ppid=4240 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.156000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:06.167000 audit[4444]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=4444 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:06.167000 audit[4444]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffeb09a1ea0 a2=0 a3=7ffeb09a1e8c items=0 ppid=4240 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.167000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:06.951330 containerd[1802]: 
time="2024-06-25T16:29:06.951275390Z" level=info msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" Jun 25 16:29:06.953619 containerd[1802]: time="2024-06-25T16:29:06.952739907Z" level=info msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" Jun 25 16:29:07.210436 systemd-networkd[1531]: vxlan.calico: Gained IPv6LL Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.103 [INFO][4505] k8s.go 608: Cleaning up netns ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.103 [INFO][4505] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" iface="eth0" netns="/var/run/netns/cni-3cfe94e7-b64e-40b7-7013-ade95ebd4eb2" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.103 [INFO][4505] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" iface="eth0" netns="/var/run/netns/cni-3cfe94e7-b64e-40b7-7013-ade95ebd4eb2" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.103 [INFO][4505] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" iface="eth0" netns="/var/run/netns/cni-3cfe94e7-b64e-40b7-7013-ade95ebd4eb2" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.104 [INFO][4505] k8s.go 615: Releasing IP address(es) ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.104 [INFO][4505] utils.go 188: Calico CNI releasing IP address ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.520 [INFO][4516] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.522 [INFO][4516] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.523 [INFO][4516] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.535 [WARNING][4516] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.535 [INFO][4516] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.537 [INFO][4516] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:07.550547 containerd[1802]: 2024-06-25 16:29:07.548 [INFO][4505] k8s.go 621: Teardown processing complete. 
ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:07.552740 containerd[1802]: time="2024-06-25T16:29:07.550718783Z" level=info msg="TearDown network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" successfully" Jun 25 16:29:07.552740 containerd[1802]: time="2024-06-25T16:29:07.550761009Z" level=info msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" returns successfully" Jun 25 16:29:07.566758 systemd[1]: run-netns-cni\x2d3cfe94e7\x2db64e\x2d40b7\x2d7013\x2dade95ebd4eb2.mount: Deactivated successfully. Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.099 [INFO][4496] k8s.go 608: Cleaning up netns ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.100 [INFO][4496] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" iface="eth0" netns="/var/run/netns/cni-8c898ee8-db8a-733e-a59f-5bb729f92b23" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.101 [INFO][4496] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" iface="eth0" netns="/var/run/netns/cni-8c898ee8-db8a-733e-a59f-5bb729f92b23" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.101 [INFO][4496] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" iface="eth0" netns="/var/run/netns/cni-8c898ee8-db8a-733e-a59f-5bb729f92b23" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.102 [INFO][4496] k8s.go 615: Releasing IP address(es) ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.102 [INFO][4496] utils.go 188: Calico CNI releasing IP address ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.520 [INFO][4515] ipam_plugin.go 411: Releasing address using handleID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.522 [INFO][4515] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.537 [INFO][4515] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.545 [WARNING][4515] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.545 [INFO][4515] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.548 [INFO][4515] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:07.580664 containerd[1802]: 2024-06-25 16:29:07.570 [INFO][4496] k8s.go 621: Teardown processing complete. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:07.581588 containerd[1802]: time="2024-06-25T16:29:07.581546121Z" level=info msg="TearDown network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" successfully" Jun 25 16:29:07.581731 containerd[1802]: time="2024-06-25T16:29:07.581707735Z" level=info msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" returns successfully" Jun 25 16:29:07.588568 systemd[1]: run-netns-cni\x2d8c898ee8\x2ddb8a\x2d733e\x2da59f\x2d5bb729f92b23.mount: Deactivated successfully. Jun 25 16:29:07.626627 containerd[1802]: time="2024-06-25T16:29:07.626581613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-552qj,Uid:7ff87323-7adf-489a-b448-aa87f84c2db0,Namespace:kube-system,Attempt:1,}" Jun 25 16:29:07.629234 containerd[1802]: time="2024-06-25T16:29:07.629173329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7s8f,Uid:cb1b6fee-76dd-4b53-8bcb-f17d750a370e,Namespace:calico-system,Attempt:1,}" Jun 25 16:29:07.881932 systemd-networkd[1531]: calicd51fa00698: Link UP Jun 25 16:29:07.895538 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:07.895668 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicd51fa00698: link becomes ready Jun 25 16:29:07.895922 systemd-networkd[1531]: calicd51fa00698: Gained carrier Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.746 [INFO][4527] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0 csi-node-driver- calico-system cb1b6fee-76dd-4b53-8bcb-f17d750a370e 755 0 2024-06-25 16:28:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-18-172 csi-node-driver-f7s8f eth0 default [] [] [kns.calico-system ksa.calico-system.default] calicd51fa00698 [] []}} ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.746 [INFO][4527] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" 
WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.817 [INFO][4549] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" HandleID="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.832 [INFO][4549] ipam_plugin.go 264: Auto assigning IP ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" HandleID="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000581a40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-172", "pod":"csi-node-driver-f7s8f", "timestamp":"2024-06-25 16:29:07.817674423 +0000 UTC"}, Hostname:"ip-172-31-18-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.832 [INFO][4549] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.833 [INFO][4549] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.833 [INFO][4549] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-172' Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.836 [INFO][4549] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.848 [INFO][4549] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.854 [INFO][4549] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.856 [INFO][4549] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.858 [INFO][4549] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.859 [INFO][4549] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.861 [INFO][4549] ipam.go 1685: Creating new handle: k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76 Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.866 [INFO][4549] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.872 [INFO][4549] ipam.go 1216: Successfully claimed IPs: [192.168.52.193/26] block=192.168.52.192/26 
handle="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.872 [INFO][4549] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.193/26] handle="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" host="ip-172-31-18-172" Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.872 [INFO][4549] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:07.965289 containerd[1802]: 2024-06-25 16:29:07.873 [INFO][4549] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.193/26] IPv6=[] ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" HandleID="k8s-pod-network.260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.878 [INFO][4527] k8s.go 386: Populated endpoint ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb1b6fee-76dd-4b53-8bcb-f17d750a370e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"", Pod:"csi-node-driver-f7s8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd51fa00698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.878 [INFO][4527] k8s.go 387: Calico CNI using IPs: [192.168.52.193/32] ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.878 [INFO][4527] dataplane_linux.go 68: Setting the host side veth name to calicd51fa00698 ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.898 [INFO][4527] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.899 [INFO][4527] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb1b6fee-76dd-4b53-8bcb-f17d750a370e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76", Pod:"csi-node-driver-f7s8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd51fa00698", MAC:"0a:51:6d:e8:e1:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:07.967132 containerd[1802]: 2024-06-25 16:29:07.963 [INFO][4527] k8s.go 500: Wrote updated endpoint to datastore ContainerID="260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76" Namespace="calico-system" Pod="csi-node-driver-f7s8f" WorkloadEndpoint="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:08.042348 systemd-networkd[1531]: cali4134d681d19: Link UP Jun 25 16:29:08.045018 systemd-networkd[1531]: cali4134d681d19: Gained carrier Jun 25 16:29:08.045375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4134d681d19: link becomes ready Jun 25 16:29:08.062658 containerd[1802]: time="2024-06-25T16:29:08.062509799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:08.062828 containerd[1802]: time="2024-06-25T16:29:08.062697654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:08.062828 containerd[1802]: time="2024-06-25T16:29:08.062767539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:08.062931 containerd[1802]: time="2024-06-25T16:29:08.062805918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.784 [INFO][4538] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0 coredns-76f75df574- kube-system 7ff87323-7adf-489a-b448-aa87f84c2db0 756 0 2024-06-25 16:28:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-172 coredns-76f75df574-552qj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4134d681d19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.785 [INFO][4538] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.841 [INFO][4555] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" HandleID="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.852 [INFO][4555] ipam_plugin.go 264: Auto assigning IP ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" HandleID="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e59f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-172", "pod":"coredns-76f75df574-552qj", "timestamp":"2024-06-25 16:29:07.840990762 +0000 UTC"}, Hostname:"ip-172-31-18-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.852 [INFO][4555] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.872 [INFO][4555] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.873 [INFO][4555] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-172' Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.877 [INFO][4555] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.891 [INFO][4555] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.921 [INFO][4555] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.960 [INFO][4555] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.970 [INFO][4555] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.970 [INFO][4555] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.973 [INFO][4555] ipam.go 1685: Creating new handle: k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70 Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:07.979 [INFO][4555] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:08.016 [INFO][4555] ipam.go 1216: Successfully claimed IPs: [192.168.52.194/26] block=192.168.52.192/26 handle="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:08.016 [INFO][4555] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.194/26] handle="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" host="ip-172-31-18-172" Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:08.016 [INFO][4555] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:08.067782 containerd[1802]: 2024-06-25 16:29:08.016 [INFO][4555] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.194/26] IPv6=[] ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" HandleID="k8s-pod-network.89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.068745 containerd[1802]: 2024-06-25 16:29:08.019 [INFO][4538] k8s.go 386: Populated endpoint ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ff87323-7adf-489a-b448-aa87f84c2db0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"", Pod:"coredns-76f75df574-552qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4134d681d19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:08.068745 containerd[1802]: 2024-06-25 16:29:08.019 [INFO][4538] k8s.go 387: Calico CNI using IPs: [192.168.52.194/32] ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.068745 containerd[1802]: 2024-06-25 16:29:08.019 [INFO][4538] dataplane_linux.go 68: Setting the host side veth name to cali4134d681d19 ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.068745 containerd[1802]: 2024-06-25 16:29:08.046 [INFO][4538] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.068745 containerd[1802]: 
2024-06-25 16:29:08.046 [INFO][4538] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ff87323-7adf-489a-b448-aa87f84c2db0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70", Pod:"coredns-76f75df574-552qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4134d681d19", MAC:"7e:11:59:fd:be:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:08.068745 containerd[1802]: 2024-06-25 16:29:08.063 [INFO][4538] k8s.go 500: Wrote updated endpoint to datastore ContainerID="89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70" Namespace="kube-system" Pod="coredns-76f75df574-552qj" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:08.088000 audit[4586]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4586 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:08.088000 audit[4586]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffc8aee8f70 a2=0 a3=7ffc8aee8f5c items=0 ppid=4240 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.088000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:08.102567 systemd[1]: Started cri-containerd-260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76.scope - libcontainer container 260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76. 
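The audit PROCTITLE fields in the entries above carry the process command line as a hex-encoded, NUL-separated argv. A minimal offline sketch for decoding one; the value is copied verbatim from the iptables-nft-restore records above, and nothing in this snippet is part of the captured log:

# Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
proctitle = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)
argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']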
Jun 25 16:29:08.148000 audit: BPF prog-id=139 op=LOAD Jun 25 16:29:08.153000 audit: BPF prog-id=140 op=LOAD Jun 25 16:29:08.153000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4583 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236306432326438376465636138636561613966666238313634303761 Jun 25 16:29:08.153000 audit: BPF prog-id=141 op=LOAD Jun 25 16:29:08.153000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4583 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236306432326438376465636138636561613966666238313634303761 Jun 25 16:29:08.153000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:29:08.153000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:29:08.153000 audit: BPF prog-id=142 op=LOAD Jun 25 16:29:08.153000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4583 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236306432326438376465636138636561613966666238313634303761 Jun 25 16:29:08.164000 containerd[1802]: time="2024-06-25T16:29:08.163789779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:08.164807 containerd[1802]: time="2024-06-25T16:29:08.164730572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:08.165880 containerd[1802]: time="2024-06-25T16:29:08.164983872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:08.166777 containerd[1802]: time="2024-06-25T16:29:08.166060408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:08.171000 audit[4634]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4634 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:08.171000 audit[4634]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffec677c550 a2=0 a3=7ffec677c53c items=0 ppid=4240 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.171000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:08.201839 containerd[1802]: time="2024-06-25T16:29:08.201786682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7s8f,Uid:cb1b6fee-76dd-4b53-8bcb-f17d750a370e,Namespace:calico-system,Attempt:1,} returns sandbox id \"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76\"" Jun 25 16:29:08.236379 systemd[1]: Started cri-containerd-89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70.scope - libcontainer container 89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70. Jun 25 16:29:08.256000 audit: BPF prog-id=143 op=LOAD Jun 25 16:29:08.258841 containerd[1802]: time="2024-06-25T16:29:08.258797907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:29:08.259000 audit: BPF prog-id=144 op=LOAD Jun 25 16:29:08.259000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4627 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839646433343531313363333139343031323430643266363931666662 Jun 25 16:29:08.260000 audit: BPF prog-id=145 op=LOAD Jun 25 16:29:08.260000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4627 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839646433343531313363333139343031323430643266363931666662 Jun 25 16:29:08.261000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:29:08.261000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:29:08.262000 audit: BPF prog-id=146 op=LOAD Jun 25 16:29:08.262000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4627 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839646433343531313363333139343031323430643266363931666662 Jun 25 16:29:08.312945 containerd[1802]: time="2024-06-25T16:29:08.312897724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-552qj,Uid:7ff87323-7adf-489a-b448-aa87f84c2db0,Namespace:kube-system,Attempt:1,} returns sandbox id \"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70\"" Jun 25 16:29:08.336297 containerd[1802]: time="2024-06-25T16:29:08.335952619Z" level=info msg="CreateContainer within sandbox \"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:29:08.393121 containerd[1802]: time="2024-06-25T16:29:08.393058895Z" level=info msg="CreateContainer within sandbox \"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d25e9d4383049482b044032b02844d0e356bb8ba3734a3ef0a282eb26cd47314\"" Jun 25 16:29:08.396493 containerd[1802]: time="2024-06-25T16:29:08.396451594Z" level=info msg="StartContainer for \"d25e9d4383049482b044032b02844d0e356bb8ba3734a3ef0a282eb26cd47314\"" Jun 25 16:29:08.426474 systemd[1]: Started cri-containerd-d25e9d4383049482b044032b02844d0e356bb8ba3734a3ef0a282eb26cd47314.scope - libcontainer container d25e9d4383049482b044032b02844d0e356bb8ba3734a3ef0a282eb26cd47314. Jun 25 16:29:08.446000 audit: BPF prog-id=147 op=LOAD Jun 25 16:29:08.447000 audit: BPF prog-id=148 op=LOAD Jun 25 16:29:08.447000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4627 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356539643433383330343934383262303434303332623032383434 Jun 25 16:29:08.447000 audit: BPF prog-id=149 op=LOAD Jun 25 16:29:08.447000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4627 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356539643433383330343934383262303434303332623032383434 Jun 25 16:29:08.447000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:29:08.447000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:29:08.447000 audit: BPF prog-id=150 op=LOAD Jun 25 16:29:08.447000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4627 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:08.447000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356539643433383330343934383262303434303332623032383434 Jun 25 16:29:08.468144 containerd[1802]: time="2024-06-25T16:29:08.468094373Z" level=info msg="StartContainer for \"d25e9d4383049482b044032b02844d0e356bb8ba3734a3ef0a282eb26cd47314\" returns successfully" Jun 25 16:29:08.951653 containerd[1802]: time="2024-06-25T16:29:08.951606256Z" level=info msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" Jun 25 16:29:08.952248 containerd[1802]: time="2024-06-25T16:29:08.951884216Z" level=info msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.078 [INFO][4728] k8s.go 608: Cleaning up netns ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.079 [INFO][4728] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" iface="eth0" netns="/var/run/netns/cni-a54317d0-f52b-0ef0-5859-176db9e8b07c" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.079 [INFO][4728] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" iface="eth0" netns="/var/run/netns/cni-a54317d0-f52b-0ef0-5859-176db9e8b07c" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.079 [INFO][4728] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" iface="eth0" netns="/var/run/netns/cni-a54317d0-f52b-0ef0-5859-176db9e8b07c" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.079 [INFO][4728] k8s.go 615: Releasing IP address(es) ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.079 [INFO][4728] utils.go 188: Calico CNI releasing IP address ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.163 [INFO][4751] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.166 [INFO][4751] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.168 [INFO][4751] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.186 [WARNING][4751] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.186 [INFO][4751] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.188 [INFO][4751] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:09.196901 containerd[1802]: 2024-06-25 16:29:09.191 [INFO][4728] k8s.go 621: Teardown processing complete. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:09.205829 containerd[1802]: time="2024-06-25T16:29:09.202230991Z" level=info msg="TearDown network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" successfully" Jun 25 16:29:09.205829 containerd[1802]: time="2024-06-25T16:29:09.202279934Z" level=info msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" returns successfully" Jun 25 16:29:09.205829 containerd[1802]: time="2024-06-25T16:29:09.203053265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xjl2t,Uid:12cf6b5e-7d0a-4601-b457-20c98f952e2c,Namespace:kube-system,Attempt:1,}" Jun 25 16:29:09.200476 systemd[1]: run-netns-cni\x2da54317d0\x2df52b\x2d0ef0\x2d5859\x2d176db9e8b07c.mount: Deactivated successfully. Jun 25 16:29:09.278517 systemd-networkd[1531]: cali4134d681d19: Gained IPv6LL Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.111 [INFO][4737] k8s.go 608: Cleaning up netns ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.111 [INFO][4737] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" iface="eth0" netns="/var/run/netns/cni-241fbd82-91f5-7030-afed-7668576f5f42" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.111 [INFO][4737] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" iface="eth0" netns="/var/run/netns/cni-241fbd82-91f5-7030-afed-7668576f5f42" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.112 [INFO][4737] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" iface="eth0" netns="/var/run/netns/cni-241fbd82-91f5-7030-afed-7668576f5f42" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.112 [INFO][4737] k8s.go 615: Releasing IP address(es) ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.112 [INFO][4737] utils.go 188: Calico CNI releasing IP address ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.248 [INFO][4756] ipam_plugin.go 411: Releasing address using handleID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.248 [INFO][4756] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.248 [INFO][4756] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.343 [WARNING][4756] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.343 [INFO][4756] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.350 [INFO][4756] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:09.372397 containerd[1802]: 2024-06-25 16:29:09.365 [INFO][4737] k8s.go 621: Teardown processing complete. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:09.377447 systemd[1]: run-netns-cni\x2d241fbd82\x2d91f5\x2d7030\x2dafed\x2d7668576f5f42.mount: Deactivated successfully. 
Jun 25 16:29:09.379622 containerd[1802]: time="2024-06-25T16:29:09.377735153Z" level=info msg="TearDown network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" successfully" Jun 25 16:29:09.379622 containerd[1802]: time="2024-06-25T16:29:09.377781997Z" level=info msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" returns successfully" Jun 25 16:29:09.380883 containerd[1802]: time="2024-06-25T16:29:09.380840984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c95c4bb-vmlqt,Uid:1f02df65-1d0e-4c87-89a7-023877ca1122,Namespace:calico-system,Attempt:1,}" Jun 25 16:29:09.442653 kubelet[3202]: I0625 16:29:09.442617 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-552qj" podStartSLOduration=35.442538494 podStartE2EDuration="35.442538494s" podCreationTimestamp="2024-06-25 16:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:29:09.40660127 +0000 UTC m=+48.698833545" watchObservedRunningTime="2024-06-25 16:29:09.442538494 +0000 UTC m=+48.734770770" Jun 25 16:29:09.640422 systemd-networkd[1531]: calicd51fa00698: Gained IPv6LL Jun 25 16:29:09.656322 systemd-networkd[1531]: caliab256417be3: Link UP Jun 25 16:29:09.657487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliab256417be3: link becomes ready Jun 25 16:29:09.657302 systemd-networkd[1531]: caliab256417be3: Gained carrier Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.449 [INFO][4763] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0 coredns-76f75df574- kube-system 12cf6b5e-7d0a-4601-b457-20c98f952e2c 779 0 2024-06-25 16:28:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-172 coredns-76f75df574-xjl2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab256417be3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.450 [INFO][4763] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.542 [INFO][4788] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" HandleID="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.601 [INFO][4788] ipam_plugin.go 264: Auto assigning IP ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" HandleID="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003123f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-172", "pod":"coredns-76f75df574-xjl2t", "timestamp":"2024-06-25 16:29:09.542332509 +0000 UTC"}, Hostname:"ip-172-31-18-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.601 [INFO][4788] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.601 [INFO][4788] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.601 [INFO][4788] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-172' Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.607 [INFO][4788] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.613 [INFO][4788] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.618 [INFO][4788] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.622 [INFO][4788] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.625 [INFO][4788] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.625 [INFO][4788] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.630 [INFO][4788] ipam.go 1685: Creating new handle: k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763 Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.635 [INFO][4788] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.649 [INFO][4788] ipam.go 1216: Successfully claimed IPs: [192.168.52.195/26] block=192.168.52.192/26 handle="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.649 [INFO][4788] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.195/26] handle="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" host="ip-172-31-18-172" Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.649 [INFO][4788] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:09.677357 containerd[1802]: 2024-06-25 16:29:09.650 [INFO][4788] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.195/26] IPv6=[] ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" HandleID="k8s-pod-network.168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.679533 containerd[1802]: 2024-06-25 16:29:09.652 [INFO][4763] k8s.go 386: Populated endpoint ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12cf6b5e-7d0a-4601-b457-20c98f952e2c", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"", Pod:"coredns-76f75df574-xjl2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab256417be3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:09.679533 containerd[1802]: 2024-06-25 16:29:09.652 [INFO][4763] k8s.go 387: Calico CNI using IPs: [192.168.52.195/32] ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.679533 containerd[1802]: 2024-06-25 16:29:09.652 [INFO][4763] dataplane_linux.go 68: Setting the host side veth name to caliab256417be3 ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.679533 containerd[1802]: 2024-06-25 16:29:09.656 [INFO][4763] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.679533 containerd[1802]: 
2024-06-25 16:29:09.657 [INFO][4763] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12cf6b5e-7d0a-4601-b457-20c98f952e2c", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763", Pod:"coredns-76f75df574-xjl2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab256417be3", MAC:"ba:81:c3:33:9a:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:09.679533 containerd[1802]: 2024-06-25 16:29:09.674 [INFO][4763] k8s.go 500: Wrote updated endpoint to datastore ContainerID="168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763" Namespace="kube-system" Pod="coredns-76f75df574-xjl2t" WorkloadEndpoint="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:09.687000 audit[4803]: NETFILTER_CFG table=filter:103 family=2 entries=11 op=nft_register_rule pid=4803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.687000 audit[4803]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd32903a00 a2=0 a3=7ffd329039ec items=0 ppid=3352 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.692000 audit[4803]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=4803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.692000 audit[4803]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd32903a00 a2=0 a3=7ffd329039ec items=0 ppid=3352 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.775000 audit[4825]: NETFILTER_CFG table=filter:105 family=2 entries=34 op=nft_register_chain pid=4825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:09.775000 audit[4825]: SYSCALL arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7fff72ca9aa0 a2=0 a3=7fff72ca9a8c items=0 ppid=4240 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.775000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:09.837343 systemd-networkd[1531]: calicf7064d710c: Link UP Jun 25 16:29:09.839541 containerd[1802]: time="2024-06-25T16:29:09.839232531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:09.839541 containerd[1802]: time="2024-06-25T16:29:09.839316689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:09.839541 containerd[1802]: time="2024-06-25T16:29:09.839356102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:09.839541 containerd[1802]: time="2024-06-25T16:29:09.839373170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:09.842637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicf7064d710c: link becomes ready Jun 25 16:29:09.841872 systemd-networkd[1531]: calicf7064d710c: Gained carrier Jun 25 16:29:09.896647 kernel: kauditd_printk_skb: 115 callbacks suppressed Jun 25 16:29:09.896767 kernel: audit: type=1325 audit(1719332949.894:560): table=filter:106 family=2 entries=8 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.894000 audit[4851]: NETFILTER_CFG table=filter:106 family=2 entries=8 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.899413 systemd[1]: Started cri-containerd-168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763.scope - libcontainer container 168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763. 
Jun 25 16:29:09.908069 kernel: audit: type=1300 audit(1719332949.894:560): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcf21f490 a2=0 a3=7fffcf21f47c items=0 ppid=3352 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.908219 kernel: audit: type=1327 audit(1719332949.894:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.894000 audit[4851]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcf21f490 a2=0 a3=7fffcf21f47c items=0 ppid=3352 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.911222 kernel: audit: type=1325 audit(1719332949.903:561): table=nat:107 family=2 entries=20 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.903000 audit[4851]: NETFILTER_CFG table=nat:107 family=2 entries=20 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:09.903000 audit[4851]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcf21f490 a2=0 a3=7fffcf21f47c items=0 ppid=3352 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.917413 kernel: audit: type=1300 audit(1719332949.903:561): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcf21f490 a2=0 a3=7fffcf21f47c items=0 ppid=3352 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.496 [INFO][4776] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0 calico-kube-controllers-796c95c4bb- calico-system 1f02df65-1d0e-4c87-89a7-023877ca1122 780 0 2024-06-25 16:28:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:796c95c4bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-172 calico-kube-controllers-796c95c4bb-vmlqt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf7064d710c [] []}} ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.496 [INFO][4776] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.589 [INFO][4794] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" HandleID="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.612 [INFO][4794] ipam_plugin.go 264: Auto assigning IP ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" HandleID="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-172", "pod":"calico-kube-controllers-796c95c4bb-vmlqt", "timestamp":"2024-06-25 16:29:09.589600024 +0000 UTC"}, Hostname:"ip-172-31-18-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.612 [INFO][4794] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.649 [INFO][4794] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.650 [INFO][4794] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-172' Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.652 [INFO][4794] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.689 [INFO][4794] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.742 [INFO][4794] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.747 [INFO][4794] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.780 [INFO][4794] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.780 [INFO][4794] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.787 [INFO][4794] ipam.go 1685: Creating new handle: k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6 Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.809 [INFO][4794] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.831 [INFO][4794] ipam.go 1216: Successfully claimed IPs: [192.168.52.196/26] block=192.168.52.192/26 handle="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.831 [INFO][4794] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.196/26] handle="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" host="ip-172-31-18-172" Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.831 [INFO][4794] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:09.941508 containerd[1802]: 2024-06-25 16:29:09.831 [INFO][4794] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.196/26] IPv6=[] ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" HandleID="k8s-pod-network.0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.942727 kernel: audit: type=1327 audit(1719332949.903:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.834 [INFO][4776] k8s.go 386: Populated endpoint ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0", GenerateName:"calico-kube-controllers-796c95c4bb-", Namespace:"calico-system", SelfLink:"", UID:"1f02df65-1d0e-4c87-89a7-023877ca1122", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c95c4bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"", Pod:"calico-kube-controllers-796c95c4bb-vmlqt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf7064d710c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.834 [INFO][4776] k8s.go 387: Calico CNI using IPs: [192.168.52.196/32] ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.834 [INFO][4776] dataplane_linux.go 68: Setting the host side veth name to calicf7064d710c ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.845 [INFO][4776] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" 
WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.845 [INFO][4776] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0", GenerateName:"calico-kube-controllers-796c95c4bb-", Namespace:"calico-system", SelfLink:"", UID:"1f02df65-1d0e-4c87-89a7-023877ca1122", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c95c4bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6", Pod:"calico-kube-controllers-796c95c4bb-vmlqt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf7064d710c", MAC:"96:30:1f:e8:7e:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:09.942771 containerd[1802]: 2024-06-25 16:29:09.919 [INFO][4776] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6" Namespace="calico-system" Pod="calico-kube-controllers-796c95c4bb-vmlqt" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:09.962569 kernel: audit: type=1334 audit(1719332949.958:562): prog-id=151 op=LOAD Jun 25 16:29:09.962692 kernel: audit: type=1334 audit(1719332949.958:563): prog-id=152 op=LOAD Jun 25 16:29:09.962731 kernel: audit: type=1300 audit(1719332949.958:563): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4827 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.958000 audit: BPF prog-id=151 op=LOAD Jun 25 16:29:09.958000 audit: BPF prog-id=152 op=LOAD Jun 25 16:29:09.958000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4827 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.971259 kernel: audit: type=1327 audit(1719332949.958:563): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136383036396531393864623934633462306332353232323261363735 Jun 25 16:29:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136383036396531393864623934633462306332353232323261363735 Jun 25 16:29:09.958000 audit: BPF prog-id=153 op=LOAD Jun 25 16:29:09.958000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4827 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136383036396531393864623934633462306332353232323261363735 Jun 25 16:29:09.958000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:29:09.958000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:29:09.958000 audit: BPF prog-id=154 op=LOAD Jun 25 16:29:09.958000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4827 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136383036396531393864623934633462306332353232323261363735 Jun 25 16:29:09.994000 audit[4866]: NETFILTER_CFG table=filter:108 family=2 entries=48 op=nft_register_chain pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:09.994000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=23868 a0=3 a1=7ffd48949b60 a2=0 a3=7ffd48949b4c items=0 ppid=4240 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.994000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:10.140685 containerd[1802]: time="2024-06-25T16:29:10.140336708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:10.140685 containerd[1802]: time="2024-06-25T16:29:10.140427491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:10.140685 containerd[1802]: time="2024-06-25T16:29:10.140456768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:10.140685 containerd[1802]: time="2024-06-25T16:29:10.140472008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:10.179469 containerd[1802]: time="2024-06-25T16:29:10.171847940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xjl2t,Uid:12cf6b5e-7d0a-4601-b457-20c98f952e2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763\"" Jun 25 16:29:10.198968 containerd[1802]: time="2024-06-25T16:29:10.198917833Z" level=info msg="CreateContainer within sandbox \"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:29:10.222446 systemd[1]: Started cri-containerd-0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6.scope - libcontainer container 0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6. Jun 25 16:29:10.251473 containerd[1802]: time="2024-06-25T16:29:10.251324968Z" level=info msg="CreateContainer within sandbox \"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14a95621ae47988f85df514dc1874b5044fa1e77048a221a1747134003134b16\"" Jun 25 16:29:10.252903 containerd[1802]: time="2024-06-25T16:29:10.252623359Z" level=info msg="StartContainer for \"14a95621ae47988f85df514dc1874b5044fa1e77048a221a1747134003134b16\"" Jun 25 16:29:10.302000 audit: BPF prog-id=155 op=LOAD Jun 25 16:29:10.303000 audit: BPF prog-id=156 op=LOAD Jun 25 16:29:10.303000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4880 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030333761396331643539333332653035656238666630373566636361 Jun 25 16:29:10.303000 audit: BPF prog-id=157 op=LOAD Jun 25 16:29:10.303000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4880 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030333761396331643539333332653035656238666630373566636361 Jun 25 16:29:10.303000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:29:10.303000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:29:10.303000 audit: BPF prog-id=158 op=LOAD Jun 25 16:29:10.303000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4880 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.303000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030333761396331643539333332653035656238666630373566636361 Jun 25 16:29:10.439150 containerd[1802]: time="2024-06-25T16:29:10.437996213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c95c4bb-vmlqt,Uid:1f02df65-1d0e-4c87-89a7-023877ca1122,Namespace:calico-system,Attempt:1,} returns sandbox id \"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6\"" Jun 25 16:29:10.455450 systemd[1]: Started cri-containerd-14a95621ae47988f85df514dc1874b5044fa1e77048a221a1747134003134b16.scope - libcontainer container 14a95621ae47988f85df514dc1874b5044fa1e77048a221a1747134003134b16. Jun 25 16:29:10.482000 audit: BPF prog-id=159 op=LOAD Jun 25 16:29:10.483000 audit: BPF prog-id=160 op=LOAD Jun 25 16:29:10.483000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4827 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134613935363231616534373938386638356466353134646331383734 Jun 25 16:29:10.484000 audit: BPF prog-id=161 op=LOAD Jun 25 16:29:10.484000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4827 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134613935363231616534373938386638356466353134646331383734 Jun 25 16:29:10.484000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:29:10.485000 audit: BPF prog-id=160 op=UNLOAD Jun 25 16:29:10.485000 audit: BPF prog-id=162 op=LOAD Jun 25 16:29:10.485000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4827 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134613935363231616534373938386638356466353134646331383734 Jun 25 16:29:10.518304 containerd[1802]: time="2024-06-25T16:29:10.518251373Z" level=info msg="StartContainer for \"14a95621ae47988f85df514dc1874b5044fa1e77048a221a1747134003134b16\" returns successfully" Jun 25 16:29:10.522686 containerd[1802]: time="2024-06-25T16:29:10.522643752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:10.525285 containerd[1802]: time="2024-06-25T16:29:10.525215101Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:29:10.528039 containerd[1802]: time="2024-06-25T16:29:10.527997338Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:10.537215 containerd[1802]: time="2024-06-25T16:29:10.534140951Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:10.538339 containerd[1802]: time="2024-06-25T16:29:10.538291718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:10.540522 containerd[1802]: time="2024-06-25T16:29:10.540457303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.281603718s" Jun 25 16:29:10.540651 containerd[1802]: time="2024-06-25T16:29:10.540528189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:29:10.542598 containerd[1802]: time="2024-06-25T16:29:10.542555550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:29:10.543735 containerd[1802]: time="2024-06-25T16:29:10.543698846Z" level=info msg="CreateContainer within sandbox \"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:29:10.575657 containerd[1802]: time="2024-06-25T16:29:10.575604055Z" level=info msg="CreateContainer within sandbox \"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9571ff9427a01b19023c6e7003bbc56748e2ca090bb2246f297853aca6508d05\"" Jun 25 16:29:10.576625 containerd[1802]: time="2024-06-25T16:29:10.576580805Z" level=info msg="StartContainer for \"9571ff9427a01b19023c6e7003bbc56748e2ca090bb2246f297853aca6508d05\"" Jun 25 16:29:10.685413 systemd[1]: Started cri-containerd-9571ff9427a01b19023c6e7003bbc56748e2ca090bb2246f297853aca6508d05.scope - libcontainer container 9571ff9427a01b19023c6e7003bbc56748e2ca090bb2246f297853aca6508d05. 
Jun 25 16:29:10.722000 audit: BPF prog-id=163 op=LOAD Jun 25 16:29:10.722000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4583 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166663934323761303162313930323363366537303033626263 Jun 25 16:29:10.722000 audit: BPF prog-id=164 op=LOAD Jun 25 16:29:10.722000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4583 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166663934323761303162313930323363366537303033626263 Jun 25 16:29:10.722000 audit: BPF prog-id=164 op=UNLOAD Jun 25 16:29:10.722000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:29:10.722000 audit: BPF prog-id=165 op=LOAD Jun 25 16:29:10.722000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4583 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166663934323761303162313930323363366537303033626263 Jun 25 16:29:10.752885 containerd[1802]: time="2024-06-25T16:29:10.752824209Z" level=info msg="StartContainer for \"9571ff9427a01b19023c6e7003bbc56748e2ca090bb2246f297853aca6508d05\" returns successfully" Jun 25 16:29:11.155877 systemd[1]: Started sshd@8-172.31.18.172:22-139.178.89.65:50638.service - OpenSSH per-connection server daemon (139.178.89.65:50638). Jun 25 16:29:11.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.172:22-139.178.89.65:50638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:11.205439 systemd-networkd[1531]: calicf7064d710c: Gained IPv6LL Jun 25 16:29:11.410589 kubelet[3202]: I0625 16:29:11.410473 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xjl2t" podStartSLOduration=37.410397083 podStartE2EDuration="37.410397083s" podCreationTimestamp="2024-06-25 16:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:29:11.410283808 +0000 UTC m=+50.702516096" watchObservedRunningTime="2024-06-25 16:29:11.410397083 +0000 UTC m=+50.702629359" Jun 25 16:29:11.432000 audit[4998]: USER_ACCT pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.433001 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 50638 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:11.433000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.433000 audit[4998]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcc26aea0 a2=3 a3=7f17f2703480 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.433000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:11.436090 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:11.446651 systemd-logind[1790]: New session 9 of user core. Jun 25 16:29:11.450405 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:29:11.461000 audit[4998]: USER_START pid=4998 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.463000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.483000 audit[5012]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:11.483000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe33b9e4b0 a2=0 a3=7ffe33b9e49c items=0 ppid=3352 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:11.484000 audit[5012]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:11.484000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe33b9e4b0 a2=0 a3=7ffe33b9e49c items=0 ppid=3352 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:11.506000 audit[5014]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:11.506000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffeb9af51f0 a2=0 a3=7ffeb9af51dc items=0 ppid=3352 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:11.529000 audit[5014]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:11.529000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffeb9af51f0 a2=0 a3=7ffeb9af51dc items=0 ppid=3352 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:11.688924 systemd-networkd[1531]: caliab256417be3: Gained IPv6LL Jun 25 16:29:11.747386 sshd[4998]: pam_unix(sshd:session): session closed for user core Jun 25 
16:29:11.749000 audit[4998]: USER_END pid=4998 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.749000 audit[4998]: CRED_DISP pid=4998 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:11.752678 systemd-logind[1790]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:29:11.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.172:22-139.178.89.65:50638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:11.753539 systemd[1]: sshd@8-172.31.18.172:22-139.178.89.65:50638.service: Deactivated successfully. Jun 25 16:29:11.754583 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:29:11.757131 systemd-logind[1790]: Removed session 9. Jun 25 16:29:14.285737 containerd[1802]: time="2024-06-25T16:29:14.285688206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:14.288062 containerd[1802]: time="2024-06-25T16:29:14.287988987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:29:14.290576 containerd[1802]: time="2024-06-25T16:29:14.290535966Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:14.294050 containerd[1802]: time="2024-06-25T16:29:14.294002082Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:14.323889 containerd[1802]: time="2024-06-25T16:29:14.323843000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:14.325591 containerd[1802]: time="2024-06-25T16:29:14.325535681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.782925629s" Jun 25 16:29:14.325811 containerd[1802]: time="2024-06-25T16:29:14.325779907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:29:14.329065 containerd[1802]: time="2024-06-25T16:29:14.329032345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:29:14.356768 containerd[1802]: time="2024-06-25T16:29:14.356709771Z" level=info msg="CreateContainer within sandbox \"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:29:14.390645 containerd[1802]: time="2024-06-25T16:29:14.390587199Z" level=info msg="CreateContainer within sandbox \"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692\"" Jun 25 16:29:14.391550 containerd[1802]: time="2024-06-25T16:29:14.391498824Z" level=info msg="StartContainer for \"08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692\"" Jun 25 16:29:14.446407 systemd[1]: Started cri-containerd-08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692.scope - libcontainer container 08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692. Jun 25 16:29:14.478000 audit: BPF prog-id=166 op=LOAD Jun 25 16:29:14.478000 audit: BPF prog-id=167 op=LOAD Jun 25 16:29:14.478000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4880 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038646639336364373739323563623436353637373435336566303035 Jun 25 16:29:14.479000 audit: BPF prog-id=168 op=LOAD Jun 25 16:29:14.479000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4880 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038646639336364373739323563623436353637373435336566303035 Jun 25 16:29:14.479000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:29:14.479000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:29:14.479000 audit: BPF prog-id=169 op=LOAD Jun 25 16:29:14.479000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4880 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038646639336364373739323563623436353637373435336566303035 Jun 25 16:29:14.529092 containerd[1802]: time="2024-06-25T16:29:14.529041830Z" level=info msg="StartContainer for \"08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692\" returns successfully" Jun 25 16:29:14.635000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:14.635000 audit[2803]: SYSCALL 
arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000078740 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:14.635000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:14.635000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:14.635000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000df2840 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:14.635000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:15.573538 kubelet[3202]: I0625 16:29:15.572508 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-796c95c4bb-vmlqt" podStartSLOduration=31.697842944 podStartE2EDuration="35.57242771s" podCreationTimestamp="2024-06-25 16:28:40 +0000 UTC" firstStartedPulling="2024-06-25 16:29:10.451681283 +0000 UTC m=+49.743913553" lastFinishedPulling="2024-06-25 16:29:14.326266053 +0000 UTC m=+53.618498319" observedRunningTime="2024-06-25 16:29:15.554221006 +0000 UTC m=+54.846453282" watchObservedRunningTime="2024-06-25 16:29:15.57242771 +0000 UTC m=+54.864659986" Jun 25 16:29:16.066051 containerd[1802]: time="2024-06-25T16:29:16.065997846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.068043 containerd[1802]: time="2024-06-25T16:29:16.067985113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:29:16.070719 containerd[1802]: time="2024-06-25T16:29:16.070680801Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.073975 containerd[1802]: time="2024-06-25T16:29:16.073936114Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.076908 containerd[1802]: time="2024-06-25T16:29:16.076866455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.078026 containerd[1802]: time="2024-06-25T16:29:16.077955818Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.744645655s" Jun 25 16:29:16.078203 containerd[1802]: time="2024-06-25T16:29:16.078156602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:29:16.081488 containerd[1802]: time="2024-06-25T16:29:16.081455546Z" level=info msg="CreateContainer within sandbox \"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:29:16.115130 containerd[1802]: time="2024-06-25T16:29:16.115082732Z" level=info msg="CreateContainer within sandbox \"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1289005472d6776ca7f37a1e3dfe662dd10790aa8475b4a6d8c501ae4f8f7fe0\"" Jun 25 16:29:16.116012 containerd[1802]: time="2024-06-25T16:29:16.115974632Z" level=info msg="StartContainer for \"1289005472d6776ca7f37a1e3dfe662dd10790aa8475b4a6d8c501ae4f8f7fe0\"" Jun 25 16:29:16.168424 systemd[1]: Started cri-containerd-1289005472d6776ca7f37a1e3dfe662dd10790aa8475b4a6d8c501ae4f8f7fe0.scope - libcontainer container 1289005472d6776ca7f37a1e3dfe662dd10790aa8475b4a6d8c501ae4f8f7fe0. Jun 25 16:29:16.211000 audit: BPF prog-id=170 op=LOAD Jun 25 16:29:16.212221 kernel: kauditd_printk_skb: 87 callbacks suppressed Jun 25 16:29:16.212308 kernel: audit: type=1334 audit(1719332956.211:607): prog-id=170 op=LOAD Jun 25 16:29:16.211000 audit[5104]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.216081 kernel: audit: type=1300 audit(1719332956.211:607): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383930303534373264363737366361376633376131653364666536 Jun 25 16:29:16.219207 kernel: audit: type=1327 audit(1719332956.211:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383930303534373264363737366361376633376131653364666536 Jun 25 16:29:16.211000 audit: BPF prog-id=171 op=LOAD Jun 25 16:29:16.220294 kernel: audit: type=1334 audit(1719332956.211:608): prog-id=171 op=LOAD Jun 25 16:29:16.211000 audit[5104]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.226271 kernel: audit: type=1300 audit(1719332956.211:608): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383930303534373264363737366361376633376131653364666536 Jun 25 16:29:16.211000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:29:16.211000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:29:16.230816 kernel: audit: type=1327 audit(1719332956.211:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383930303534373264363737366361376633376131653364666536 Jun 25 16:29:16.230879 kernel: audit: type=1334 audit(1719332956.211:609): prog-id=171 op=UNLOAD Jun 25 16:29:16.230917 kernel: audit: type=1334 audit(1719332956.211:610): prog-id=170 op=UNLOAD Jun 25 16:29:16.211000 audit: BPF prog-id=172 op=LOAD Jun 25 16:29:16.231723 kernel: audit: type=1334 audit(1719332956.211:611): prog-id=172 op=LOAD Jun 25 16:29:16.211000 audit[5104]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.234379 kernel: audit: type=1300 audit(1719332956.211:611): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4583 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383930303534373264363737366361376633376131653364666536 Jun 25 16:29:16.263250 containerd[1802]: time="2024-06-25T16:29:16.263124537Z" level=info msg="StartContainer for \"1289005472d6776ca7f37a1e3dfe662dd10790aa8475b4a6d8c501ae4f8f7fe0\" returns successfully" Jun 25 16:29:16.402000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.402000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.402000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00faac0f0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.402000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.402000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00fcd4210 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.402000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.409000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.409000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00fcd4780 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.409000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.415000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.415000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00cc94800 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.415000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.418000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.418000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c00bd545c0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.418000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.419000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:16.419000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c00fcd4cf0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:29:16.419000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:16.489623 kubelet[3202]: I0625 16:29:16.489580 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-f7s8f" podStartSLOduration=28.62719166 podStartE2EDuration="36.489522741s" podCreationTimestamp="2024-06-25 16:28:40 +0000 UTC" firstStartedPulling="2024-06-25 16:29:08.216358371 +0000 UTC m=+47.508590627" lastFinishedPulling="2024-06-25 16:29:16.078689442 +0000 UTC m=+55.370921708" observedRunningTime="2024-06-25 16:29:16.487178353 +0000 UTC m=+55.779410628" watchObservedRunningTime="2024-06-25 16:29:16.489522741 +0000 UTC m=+55.781755016" Jun 25 16:29:16.778679 systemd[1]: Started sshd@9-172.31.18.172:22-139.178.89.65:35086.service - OpenSSH per-connection server daemon (139.178.89.65:35086). Jun 25 16:29:16.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.172:22-139.178.89.65:35086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:16.983000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.983641 sshd[5140]: Accepted publickey for core from 139.178.89.65 port 35086 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:16.985000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.985000 audit[5140]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8759e660 a2=3 a3=7f85b6130480 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.985000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:16.987282 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:16.996112 systemd-logind[1790]: New session 10 of user core. Jun 25 16:29:16.999438 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 16:29:17.012000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.015000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.368769 kubelet[3202]: I0625 16:29:17.368720 3202 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:29:17.387594 kubelet[3202]: I0625 16:29:17.387558 3202 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:29:17.585121 sshd[5140]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:17.586000 audit[5140]: USER_END pid=5140 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.586000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.588746 systemd[1]: sshd@9-172.31.18.172:22-139.178.89.65:35086.service: Deactivated successfully. Jun 25 16:29:17.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.172:22-139.178.89.65:35086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:17.589916 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:29:17.591577 systemd-logind[1790]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:29:17.593158 systemd-logind[1790]: Removed session 10. Jun 25 16:29:17.620309 systemd[1]: Started sshd@10-172.31.18.172:22-139.178.89.65:35100.service - OpenSSH per-connection server daemon (139.178.89.65:35100). Jun 25 16:29:17.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.172:22-139.178.89.65:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:17.781000 audit[5152]: USER_ACCT pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.781698 sshd[5152]: Accepted publickey for core from 139.178.89.65 port 35100 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:17.782000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.782000 audit[5152]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5d6cd220 a2=3 a3=7f4784d71480 items=0 ppid=1 pid=5152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:17.783406 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:17.788266 systemd-logind[1790]: New session 11 of user core. Jun 25 16:29:17.794462 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:29:17.800000 audit[5152]: USER_START pid=5152 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:17.802000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.112255 sshd[5152]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:18.115000 audit[5152]: USER_END pid=5152 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.116000 audit[5152]: CRED_DISP pid=5152 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.118723 systemd-logind[1790]: Session 11 logged out. Waiting for processes to exit. 
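The audit PROCTITLE records above store the audited process's command line as a hex string with NUL-separated arguments (auditd hex-encodes any proctitle that contains non-printable bytes). A minimal sketch, assuming a Python interpreter is at hand, that turns such a value back into argv; the shortened sample hex is copied from one of the runc records above:

    # Decode an audit PROCTITLE value: hex-encoded bytes, arguments separated by NUL.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        # Split on NUL separators and drop any trailing empty element.
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        # Shortened sample taken from a runc PROCTITLE record above.
        sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
                  "002D2D6C6F67")
        print(decode_proctitle(sample))
        # -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

The sshd records decode the same way, but to a single string with no NUL separators: 737368643A20636F7265205B707269765D is "sshd: core [priv]".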
Jun 25 16:29:18.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.172:22-139.178.89.65:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:18.120748 systemd[1]: sshd@10-172.31.18.172:22-139.178.89.65:35100.service: Deactivated successfully. Jun 25 16:29:18.121818 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:29:18.122800 systemd-logind[1790]: Removed session 11. Jun 25 16:29:18.149498 systemd[1]: Started sshd@11-172.31.18.172:22-139.178.89.65:35102.service - OpenSSH per-connection server daemon (139.178.89.65:35102). Jun 25 16:29:18.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.172:22-139.178.89.65:35102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:18.314000 audit[5162]: USER_ACCT pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.314595 sshd[5162]: Accepted publickey for core from 139.178.89.65 port 35102 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:18.315000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.315000 audit[5162]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc445aff0 a2=3 a3=7f8f475ab480 items=0 ppid=1 pid=5162 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:18.315000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:18.316423 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:18.322437 systemd-logind[1790]: New session 12 of user core. Jun 25 16:29:18.326498 systemd[1]: Started session-12.scope - Session 12 of User core. 
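The kubelet pod_startup_latency_tracker record for calico-system/csi-node-driver-f7s8f above reports podStartE2EDuration=36.489522741s and podStartSLOduration=28.62719166; the SLO figure appears to be the end-to-end duration with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, and the timestamps in the record reproduce it almost exactly. A small check of that arithmetic, with the record's timestamps truncated to microseconds:

    from datetime import datetime

    # Timestamps copied (truncated to microseconds) from the kubelet
    # pod_startup_latency_tracker record for csi-node-driver-f7s8f above.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created    = datetime.strptime("2024-06-25 16:28:40.000000", fmt)  # podCreationTimestamp
    first_pull = datetime.strptime("2024-06-25 16:29:08.216358", fmt)  # firstStartedPulling
    last_pull  = datetime.strptime("2024-06-25 16:29:16.078689", fmt)  # lastFinishedPulling
    running    = datetime.strptime("2024-06-25 16:29:16.489522", fmt)  # watchObservedRunningTime

    e2e = (running - created).total_seconds()              # ~36.489522s -> podStartE2EDuration
    slo = e2e - (last_pull - first_pull).total_seconds()   # ~28.627191s -> podStartSLOduration
    print(f"E2E={e2e:.6f}s  SLO={slo:.6f}s")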
Jun 25 16:29:18.332000 audit[5162]: USER_START pid=5162 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.334000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.552774 sshd[5162]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:18.555000 audit[5162]: USER_END pid=5162 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.556000 audit[5162]: CRED_DISP pid=5162 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:18.567274 systemd-logind[1790]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:29:18.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.172:22-139.178.89.65:35102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:18.567474 systemd[1]: sshd@11-172.31.18.172:22-139.178.89.65:35102.service: Deactivated successfully. Jun 25 16:29:18.570012 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:29:18.571789 systemd-logind[1790]: Removed session 12. 
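The three SSH connections above (sessions 10, 11 and 12 for user core from 139.178.89.65) each emit the same audit sequence: SERVICE_START for the per-connection sshd unit, USER_ACCT and CRED_ACQ when the public key is accepted, USER_START when the PAM session opens, then USER_END, CRED_DISP and SERVICE_STOP when it closes. A minimal sketch that pairs USER_START and USER_END records by their ses= id and reports how long each session stayed open; console.log is a hypothetical saved copy of a log like this one:

    import re
    from datetime import datetime

    # Pair USER_START / USER_END audit records by ses= id and report session lifetimes,
    # using the "Jun 25 16:29:17.012000" prefix that journald printed on each entry above.
    LINE = re.compile(
        r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: "
        r"(?P<event>USER_START|USER_END)\b.*?\bses=(?P<ses>\d+)"
    )

    def session_durations(text: str) -> dict[str, float]:
        opened: dict[str, datetime] = {}
        closed: dict[str, float] = {}
        for m in LINE.finditer(text):
            stamp = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
            if m["event"] == "USER_START":
                opened[m["ses"]] = stamp
            elif m["ses"] in opened:
                closed[m["ses"]] = (stamp - opened.pop(m["ses"])).total_seconds()
        return closed

    if __name__ == "__main__":
        with open("console.log") as fh:   # hypothetical path to a saved copy of this log
            for ses, secs in session_durations(fh.read()).items():
                print(f"session {ses}: {secs:.3f}s")

For the sessions above this prints roughly 0.574s, 0.315s and 0.223s, so each connection was open for well under a second.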
Jun 25 16:29:20.047000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:20.047000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f2c8c0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:20.047000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:20.053000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:20.053000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f2c940 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:20.053000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:20.064000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:20.064000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:20.064000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000826ca0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:20.064000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:20.064000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000f2cce0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:29:20.064000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:20.919727 containerd[1802]: time="2024-06-25T16:29:20.916312048Z" level=info msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.052 [WARNING][5209] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12cf6b5e-7d0a-4601-b457-20c98f952e2c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763", Pod:"coredns-76f75df574-xjl2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab256417be3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.052 [INFO][5209] k8s.go 608: Cleaning up netns ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.052 [INFO][5209] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" iface="eth0" netns="" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.052 [INFO][5209] k8s.go 615: Releasing IP address(es) ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.052 [INFO][5209] utils.go 188: Calico CNI releasing IP address ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.122 [INFO][5218] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.123 [INFO][5218] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.123 [INFO][5218] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.131 [WARNING][5218] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.131 [INFO][5218] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.133 [INFO][5218] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.137966 containerd[1802]: 2024-06-25 16:29:21.135 [INFO][5209] k8s.go 621: Teardown processing complete. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.139179 containerd[1802]: time="2024-06-25T16:29:21.139131069Z" level=info msg="TearDown network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" successfully" Jun 25 16:29:21.139331 containerd[1802]: time="2024-06-25T16:29:21.139305692Z" level=info msg="StopPodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" returns successfully" Jun 25 16:29:21.140223 containerd[1802]: time="2024-06-25T16:29:21.140160724Z" level=info msg="RemovePodSandbox for \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" Jun 25 16:29:21.188469 containerd[1802]: time="2024-06-25T16:29:21.140400012Z" level=info msg="Forcibly stopping sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\"" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.233 [WARNING][5236] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12cf6b5e-7d0a-4601-b457-20c98f952e2c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"168069e198db94c4b0c252222a675f8af6e11591f3a51c932fcb74d7d80ef763", Pod:"coredns-76f75df574-xjl2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab256417be3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.233 [INFO][5236] k8s.go 608: Cleaning up netns ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.234 [INFO][5236] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" iface="eth0" netns="" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.234 [INFO][5236] k8s.go 615: Releasing IP address(es) ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.234 [INFO][5236] utils.go 188: Calico CNI releasing IP address ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.261 [INFO][5242] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.261 [INFO][5242] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.261 [INFO][5242] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.269 [WARNING][5242] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.269 [INFO][5242] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" HandleID="k8s-pod-network.6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--xjl2t-eth0" Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.272 [INFO][5242] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.276103 containerd[1802]: 2024-06-25 16:29:21.274 [INFO][5236] k8s.go 621: Teardown processing complete. ContainerID="6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc" Jun 25 16:29:21.277147 containerd[1802]: time="2024-06-25T16:29:21.277010056Z" level=info msg="TearDown network for sandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" successfully" Jun 25 16:29:21.301425 containerd[1802]: time="2024-06-25T16:29:21.301366072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:21.320698 containerd[1802]: time="2024-06-25T16:29:21.320352734Z" level=info msg="RemovePodSandbox \"6e749802c6141146d3df4cb184d3b349219be020956f78bfa21c608ba06978dc\" returns successfully" Jun 25 16:29:21.322433 containerd[1802]: time="2024-06-25T16:29:21.321941727Z" level=info msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.432 [WARNING][5263] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0", GenerateName:"calico-kube-controllers-796c95c4bb-", Namespace:"calico-system", SelfLink:"", UID:"1f02df65-1d0e-4c87-89a7-023877ca1122", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c95c4bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6", Pod:"calico-kube-controllers-796c95c4bb-vmlqt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf7064d710c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.432 [INFO][5263] k8s.go 608: Cleaning up netns ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.432 [INFO][5263] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" iface="eth0" netns="" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.432 [INFO][5263] k8s.go 615: Releasing IP address(es) ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.433 [INFO][5263] utils.go 188: Calico CNI releasing IP address ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.466 [INFO][5270] ipam_plugin.go 411: Releasing address using handleID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.466 [INFO][5270] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.466 [INFO][5270] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.472 [WARNING][5270] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.472 [INFO][5270] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.474 [INFO][5270] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.478183 containerd[1802]: 2024-06-25 16:29:21.475 [INFO][5263] k8s.go 621: Teardown processing complete. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.478183 containerd[1802]: time="2024-06-25T16:29:21.477396802Z" level=info msg="TearDown network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" successfully" Jun 25 16:29:21.478183 containerd[1802]: time="2024-06-25T16:29:21.477435240Z" level=info msg="StopPodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" returns successfully" Jun 25 16:29:21.479068 containerd[1802]: time="2024-06-25T16:29:21.479029417Z" level=info msg="RemovePodSandbox for \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" Jun 25 16:29:21.479261 containerd[1802]: time="2024-06-25T16:29:21.479181012Z" level=info msg="Forcibly stopping sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\"" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.520 [WARNING][5289] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0", GenerateName:"calico-kube-controllers-796c95c4bb-", Namespace:"calico-system", SelfLink:"", UID:"1f02df65-1d0e-4c87-89a7-023877ca1122", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c95c4bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"0037a9c1d59332e05eb8ff075fcca1659ea16a0b2ec7f5c53f8f5fb9b56a8df6", Pod:"calico-kube-controllers-796c95c4bb-vmlqt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf7064d710c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.521 [INFO][5289] k8s.go 608: Cleaning up netns ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.521 [INFO][5289] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" iface="eth0" netns="" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.521 [INFO][5289] k8s.go 615: Releasing IP address(es) ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.521 [INFO][5289] utils.go 188: Calico CNI releasing IP address ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.543 [INFO][5295] ipam_plugin.go 411: Releasing address using handleID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.543 [INFO][5295] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.544 [INFO][5295] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.551 [WARNING][5295] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.551 [INFO][5295] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" HandleID="k8s-pod-network.e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Workload="ip--172--31--18--172-k8s-calico--kube--controllers--796c95c4bb--vmlqt-eth0" Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.553 [INFO][5295] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.556417 containerd[1802]: 2024-06-25 16:29:21.554 [INFO][5289] k8s.go 621: Teardown processing complete. ContainerID="e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2" Jun 25 16:29:21.557391 containerd[1802]: time="2024-06-25T16:29:21.556494687Z" level=info msg="TearDown network for sandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" successfully" Jun 25 16:29:21.562582 containerd[1802]: time="2024-06-25T16:29:21.562505569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:21.562731 containerd[1802]: time="2024-06-25T16:29:21.562614769Z" level=info msg="RemovePodSandbox \"e751d5eff07b6931d358564b63d4056b54c8d18f21f889c87c0539168eff28b2\" returns successfully" Jun 25 16:29:21.563299 containerd[1802]: time="2024-06-25T16:29:21.563253482Z" level=info msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.612 [WARNING][5316] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb1b6fee-76dd-4b53-8bcb-f17d750a370e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76", Pod:"csi-node-driver-f7s8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd51fa00698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.612 [INFO][5316] k8s.go 608: Cleaning up netns ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.613 [INFO][5316] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" iface="eth0" netns="" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.613 [INFO][5316] k8s.go 615: Releasing IP address(es) ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.613 [INFO][5316] utils.go 188: Calico CNI releasing IP address ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.639 [INFO][5322] ipam_plugin.go 411: Releasing address using handleID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.639 [INFO][5322] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.639 [INFO][5322] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.647 [WARNING][5322] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.647 [INFO][5322] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.649 [INFO][5322] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.652420 containerd[1802]: 2024-06-25 16:29:21.650 [INFO][5316] k8s.go 621: Teardown processing complete. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.653607 containerd[1802]: time="2024-06-25T16:29:21.652470043Z" level=info msg="TearDown network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" successfully" Jun 25 16:29:21.653607 containerd[1802]: time="2024-06-25T16:29:21.652508587Z" level=info msg="StopPodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" returns successfully" Jun 25 16:29:21.656269 containerd[1802]: time="2024-06-25T16:29:21.654139120Z" level=info msg="RemovePodSandbox for \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" Jun 25 16:29:21.656269 containerd[1802]: time="2024-06-25T16:29:21.654259652Z" level=info msg="Forcibly stopping sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\"" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.700 [WARNING][5341] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb1b6fee-76dd-4b53-8bcb-f17d750a370e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"260d22d87deca8ceaa9ffb816407a2e6da52ef3bbb01d5da7c80e82576f9ce76", Pod:"csi-node-driver-f7s8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd51fa00698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.700 [INFO][5341] k8s.go 608: Cleaning up netns ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.700 [INFO][5341] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" iface="eth0" netns="" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.700 [INFO][5341] k8s.go 615: Releasing IP address(es) ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.700 [INFO][5341] utils.go 188: Calico CNI releasing IP address ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.722 [INFO][5347] ipam_plugin.go 411: Releasing address using handleID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.722 [INFO][5347] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.722 [INFO][5347] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.728 [WARNING][5347] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.728 [INFO][5347] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" HandleID="k8s-pod-network.61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Workload="ip--172--31--18--172-k8s-csi--node--driver--f7s8f-eth0" Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.729 [INFO][5347] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.733057 containerd[1802]: 2024-06-25 16:29:21.731 [INFO][5341] k8s.go 621: Teardown processing complete. ContainerID="61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a" Jun 25 16:29:21.733999 containerd[1802]: time="2024-06-25T16:29:21.733954326Z" level=info msg="TearDown network for sandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" successfully" Jun 25 16:29:21.756918 containerd[1802]: time="2024-06-25T16:29:21.756873912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:21.757076 containerd[1802]: time="2024-06-25T16:29:21.756955796Z" level=info msg="RemovePodSandbox \"61192156feacbd66f4689c2edcb966feb8ae96c0789e93f4b8d3c9411d668a8a\" returns successfully" Jun 25 16:29:21.757748 containerd[1802]: time="2024-06-25T16:29:21.757708262Z" level=info msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.798 [WARNING][5365] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ff87323-7adf-489a-b448-aa87f84c2db0", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70", Pod:"coredns-76f75df574-552qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4134d681d19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.798 [INFO][5365] k8s.go 608: Cleaning up netns ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.798 [INFO][5365] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" iface="eth0" netns="" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.798 [INFO][5365] k8s.go 615: Releasing IP address(es) ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.798 [INFO][5365] utils.go 188: Calico CNI releasing IP address ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.820 [INFO][5371] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.822 [INFO][5371] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.822 [INFO][5371] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.828 [WARNING][5371] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.828 [INFO][5371] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.830 [INFO][5371] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.833171 containerd[1802]: 2024-06-25 16:29:21.831 [INFO][5365] k8s.go 621: Teardown processing complete. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.833845 containerd[1802]: time="2024-06-25T16:29:21.833231776Z" level=info msg="TearDown network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" successfully" Jun 25 16:29:21.833845 containerd[1802]: time="2024-06-25T16:29:21.833271316Z" level=info msg="StopPodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" returns successfully" Jun 25 16:29:21.834130 containerd[1802]: time="2024-06-25T16:29:21.834093502Z" level=info msg="RemovePodSandbox for \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" Jun 25 16:29:21.834246 containerd[1802]: time="2024-06-25T16:29:21.834135456Z" level=info msg="Forcibly stopping sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\"" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.889 [WARNING][5389] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ff87323-7adf-489a-b448-aa87f84c2db0", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"89dd345113c319401240d2f691ffb696f4c85b25a3c395a400528cf034efed70", Pod:"coredns-76f75df574-552qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4134d681d19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.889 [INFO][5389] k8s.go 608: Cleaning up netns ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.890 [INFO][5389] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" iface="eth0" netns="" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.890 [INFO][5389] k8s.go 615: Releasing IP address(es) ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.890 [INFO][5389] utils.go 188: Calico CNI releasing IP address ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.922 [INFO][5395] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.922 [INFO][5395] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.922 [INFO][5395] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.928 [WARNING][5395] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.928 [INFO][5395] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" HandleID="k8s-pod-network.0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Workload="ip--172--31--18--172-k8s-coredns--76f75df574--552qj-eth0" Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.929 [INFO][5395] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:21.932921 containerd[1802]: 2024-06-25 16:29:21.931 [INFO][5389] k8s.go 621: Teardown processing complete. ContainerID="0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7" Jun 25 16:29:21.934144 containerd[1802]: time="2024-06-25T16:29:21.932974277Z" level=info msg="TearDown network for sandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" successfully" Jun 25 16:29:21.939134 containerd[1802]: time="2024-06-25T16:29:21.939093139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:21.939443 containerd[1802]: time="2024-06-25T16:29:21.939407505Z" level=info msg="RemovePodSandbox \"0d6c3c3cc4fa6cbfeac630857a2e5871a19baf58e7b761d97bece430029d63f7\" returns successfully" Jun 25 16:29:23.588021 systemd[1]: Started sshd@12-172.31.18.172:22-139.178.89.65:35104.service - OpenSSH per-connection server daemon (139.178.89.65:35104). Jun 25 16:29:23.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.172:22-139.178.89.65:35104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:23.594011 kernel: kauditd_printk_skb: 64 callbacks suppressed Jun 25 16:29:23.594072 kernel: audit: type=1130 audit(1719332963.588:649): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.172:22-139.178.89.65:35104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:23.811631 kernel: audit: type=1101 audit(1719332963.804:650): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.811759 kernel: audit: type=1103 audit(1719332963.804:651): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.811813 kernel: audit: type=1006 audit(1719332963.804:652): pid=5402 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:29:23.804000 audit[5402]: USER_ACCT pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.804000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.807647 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:23.804000 audit[5402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff51dc0c20 a2=3 a3=7f82297db480 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:23.816383 kernel: audit: type=1300 audit(1719332963.804:652): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff51dc0c20 a2=3 a3=7f82297db480 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:23.816437 sshd[5402]: Accepted publickey for core from 139.178.89.65 port 35104 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:23.820720 kernel: audit: type=1327 audit(1719332963.804:652): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:23.804000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:23.816500 systemd-logind[1790]: New session 13 of user core. Jun 25 16:29:23.819444 systemd[1]: Started session-13.scope - Session 13 of User core. 
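The audit PROCTITLE records above (type=1327) carry the process title as a hex string rather than plain text. A minimal decoding sketch in Python, using the value copied verbatim from the record above; the variable name is illustrative only:

# Decode the hex-encoded process title from the audit PROCTITLE record above.
proctitle_hex = "737368643A20636F7265205B707269765D"
print(bytes.fromhex(proctitle_hex).decode("ascii"))
# prints: sshd: core [priv]

The decoded title, "sshd: core [priv]", is the privilege-separated sshd monitor process handling the incoming connection for user "core".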
Jun 25 16:29:23.826000 audit[5402]: USER_START pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.829000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.831966 kernel: audit: type=1105 audit(1719332963.826:653): pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:23.832046 kernel: audit: type=1103 audit(1719332963.829:654): pid=5404 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:24.029816 sshd[5402]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:24.031000 audit[5402]: USER_END pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:24.032000 audit[5402]: CRED_DISP pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:24.037711 kernel: audit: type=1106 audit(1719332964.031:655): pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:24.037879 kernel: audit: type=1104 audit(1719332964.032:656): pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:24.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.172:22-139.178.89.65:35104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:24.034984 systemd[1]: sshd@12-172.31.18.172:22-139.178.89.65:35104.service: Deactivated successfully. Jun 25 16:29:24.036258 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:29:24.039982 systemd-logind[1790]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:29:24.041297 systemd-logind[1790]: Removed session 13. Jun 25 16:29:24.204240 systemd[1]: run-containerd-runc-k8s.io-08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692-runc.43qO2I.mount: Deactivated successfully. 
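The kernel audit lines above carry their own timestamp in the form audit(<epoch-seconds>.<millis>:<serial>), for example audit(1719332964.031:655). A small sketch converting that epoch value back to wall-clock UTC, which lines up with the "Jun 25 16:29:24" journal prefixes around it:

from datetime import datetime, timezone

# "audit(1719332964.031:655)" = <seconds since the Unix epoch>.<milliseconds>:<event serial>
print(datetime.fromtimestamp(1719332964, tz=timezone.utc))
# prints: 2024-06-25 16:29:24+00:00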
Jun 25 16:29:29.071948 systemd[1]: Started sshd@13-172.31.18.172:22-139.178.89.65:42830.service - OpenSSH per-connection server daemon (139.178.89.65:42830). Jun 25 16:29:29.075623 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:29.076554 kernel: audit: type=1130 audit(1719332969.072:658): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.172:22-139.178.89.65:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.172:22-139.178.89.65:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.242225 kernel: audit: type=1101 audit(1719332969.238:659): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.242356 kernel: audit: type=1103 audit(1719332969.239:660): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.238000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.239000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.241393 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:29.242897 sshd[5448]: Accepted publickey for core from 139.178.89.65 port 42830 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:29.245391 kernel: audit: type=1006 audit(1719332969.240:661): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:29:29.240000 audit[5448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff33183c0 a2=3 a3=7f0bc1f65480 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:29.247953 kernel: audit: type=1300 audit(1719332969.240:661): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff33183c0 a2=3 a3=7f0bc1f65480 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:29.240000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:29.248836 kernel: audit: type=1327 audit(1719332969.240:661): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:29.252807 systemd-logind[1790]: New session 14 of user core. Jun 25 16:29:29.256416 systemd[1]: Started session-14.scope - Session 14 of User core. 
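Before authentication completes, the records above show auid=4294967295; once pam_loginuid (listed among the grantors) has run, the same connection logs old-auid=4294967295 auid=500, matching "session opened for user core(uid=500)". That magic value is simply -1 stored in an unsigned 32-bit field, i.e. the "unset" audit login UID, as this quick check illustrates:

import ctypes

# auid=4294967295 is (uint32)(-1): the audit login UID has not been set yet.
print(ctypes.c_uint32(-1).value)  # 4294967295
print(2**32 - 1)                  # 4294967295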
Jun 25 16:29:29.262000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.265323 kernel: audit: type=1105 audit(1719332969.262:662): pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.266259 kernel: audit: type=1103 audit(1719332969.265:663): pid=5450 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.265000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.464330 sshd[5448]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:29.466000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.469655 systemd[1]: sshd@13-172.31.18.172:22-139.178.89.65:42830.service: Deactivated successfully. Jun 25 16:29:29.470344 kernel: audit: type=1106 audit(1719332969.466:664): pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.470610 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:29:29.466000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:29.471601 systemd-logind[1790]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:29:29.472953 systemd-logind[1790]: Removed session 14. Jun 25 16:29:29.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.172:22-139.178.89.65:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:29.474259 kernel: audit: type=1104 audit(1719332969.466:665): pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.513689 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:34.513820 kernel: audit: type=1130 audit(1719332974.511:667): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.172:22-139.178.89.65:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:34.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.172:22-139.178.89.65:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:34.511842 systemd[1]: Started sshd@14-172.31.18.172:22-139.178.89.65:42844.service - OpenSSH per-connection server daemon (139.178.89.65:42844). Jun 25 16:29:34.687000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.692406 sshd[5463]: Accepted publickey for core from 139.178.89.65 port 42844 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:34.693354 kernel: audit: type=1101 audit(1719332974.687:668): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.694000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.695620 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:34.700906 kernel: audit: type=1103 audit(1719332974.694:669): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.701125 kernel: audit: type=1006 audit(1719332974.694:670): pid=5463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:29:34.701449 kernel: audit: type=1300 audit(1719332974.694:670): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd358de2b0 a2=3 a3=7f245a256480 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:34.694000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd358de2b0 a2=3 a3=7f245a256480 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:34.705110 kernel: audit: type=1327 audit(1719332974.694:670): 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:34.694000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:34.711604 systemd-logind[1790]: New session 15 of user core. Jun 25 16:29:34.715435 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:29:34.732000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.743519 kernel: audit: type=1105 audit(1719332974.732:671): pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.744000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:34.749300 kernel: audit: type=1103 audit(1719332974.744:672): pid=5465 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:35.085425 sshd[5463]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:35.089000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:35.089000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:35.094646 kernel: audit: type=1106 audit(1719332975.089:673): pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:35.094728 kernel: audit: type=1104 audit(1719332975.089:674): pid=5463 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:35.095136 systemd[1]: sshd@14-172.31.18.172:22-139.178.89.65:42844.service: Deactivated successfully. Jun 25 16:29:35.096119 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:29:35.096950 systemd-logind[1790]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:29:35.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.172:22-139.178.89.65:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:35.098048 systemd-logind[1790]: Removed session 15. Jun 25 16:29:40.126050 systemd[1]: Started sshd@15-172.31.18.172:22-139.178.89.65:41436.service - OpenSSH per-connection server daemon (139.178.89.65:41436). Jun 25 16:29:40.134044 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:40.134226 kernel: audit: type=1130 audit(1719332980.126:676): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.172:22-139.178.89.65:41436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:40.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.172:22-139.178.89.65:41436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:40.304000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.307494 sshd[5482]: Accepted publickey for core from 139.178.89.65 port 41436 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:40.307000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.308451 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:40.310212 kernel: audit: type=1101 audit(1719332980.304:677): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.310271 kernel: audit: type=1103 audit(1719332980.307:678): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.310302 kernel: audit: type=1006 audit(1719332980.307:679): pid=5482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:29:40.307000 audit[5482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe18e106a0 a2=3 a3=7f54c72a9480 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:40.314438 kernel: audit: type=1300 audit(1719332980.307:679): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe18e106a0 a2=3 a3=7f54c72a9480 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:40.314759 kernel: audit: type=1327 audit(1719332980.307:679): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:40.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:40.319707 systemd-logind[1790]: New session 16 of user core. 
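The SYSCALL records above (arch=c000003e syscall=1 success=yes exit=3 a0=5 ...) can be read with the standard Linux audit constants: arch c000003e is AUDIT_ARCH_X86_64, and syscall number 1 on x86-64 is write(2), so each of these is a 3-byte write to file descriptor 5. A small sketch with the relevant constants; the syscall table excerpt below is intentionally tiny:

# Standard Linux audit architecture flags and the ELF machine number for x86-64.
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE    = 0x40000000
EM_X86_64        = 62

arch = 0xC000003E                      # "arch=c000003e" from the record above
assert arch == EM_X86_64 | AUDIT_ARCH_64BIT | AUDIT_ARCH_LE

# On x86-64, syscall 1 is write(2); "exit=3" is its return value (3 bytes written)
# and "a0=5" is the first argument (file descriptor 5).
x86_64_syscalls = {0: "read", 1: "write", 2: "open", 3: "close"}
print(x86_64_syscalls[1])              # write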
Jun 25 16:29:40.323411 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:29:40.329000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.332000 audit[5484]: CRED_ACQ pid=5484 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.335074 kernel: audit: type=1105 audit(1719332980.329:680): pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.335150 kernel: audit: type=1103 audit(1719332980.332:681): pid=5484 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.654050 sshd[5482]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:40.655000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.655000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.660796 kernel: audit: type=1106 audit(1719332980.655:682): pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.660884 kernel: audit: type=1104 audit(1719332980.655:683): pid=5482 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.661436 systemd[1]: sshd@15-172.31.18.172:22-139.178.89.65:41436.service: Deactivated successfully. Jun 25 16:29:40.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.172:22-139.178.89.65:41436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:40.662465 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:29:40.663243 systemd-logind[1790]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:29:40.664234 systemd-logind[1790]: Removed session 16. Jun 25 16:29:40.706998 systemd[1]: Started sshd@16-172.31.18.172:22-139.178.89.65:41442.service - OpenSSH per-connection server daemon (139.178.89.65:41442). 
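The per-connection sshd units above appear to encode both TCP endpoints in the systemd instance name, e.g. sshd@16-172.31.18.172:22-139.178.89.65:41442.service. A rough parse under that assumed naming pattern (IPv4 only, no hyphens inside the addresses), using a unit name copied from the log:

# Split "sshd@<n>-<local-addr:port>-<peer-addr:port>.service" into its parts.
unit = "sshd@16-172.31.18.172:22-139.178.89.65:41442.service"
instance = unit[len("sshd@"):-len(".service")]
n, local, peer = instance.split("-", 2)
print(n, local, peer)  # 16 172.31.18.172:22 139.178.89.65:41442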
Jun 25 16:29:40.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.172:22-139.178.89.65:41442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:40.891000 audit[5494]: USER_ACCT pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.891466 sshd[5494]: Accepted publickey for core from 139.178.89.65 port 41442 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:40.892000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.892000 audit[5494]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6911f0d0 a2=3 a3=7f95b2b67480 items=0 ppid=1 pid=5494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:40.892000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:40.893869 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:40.923636 systemd-logind[1790]: New session 17 of user core. Jun 25 16:29:40.934756 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 16:29:40.944000 audit[5494]: USER_START pid=5494 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:40.947000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:41.815810 sshd[5494]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:41.821000 audit[5494]: USER_END pid=5494 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:41.821000 audit[5494]: CRED_DISP pid=5494 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:41.828933 systemd[1]: sshd@16-172.31.18.172:22-139.178.89.65:41442.service: Deactivated successfully. Jun 25 16:29:41.830729 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:29:41.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.172:22-139.178.89.65:41442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:41.832049 systemd-logind[1790]: Session 17 logged out. Waiting for processes to exit. 
Jun 25 16:29:41.842739 systemd-logind[1790]: Removed session 17. Jun 25 16:29:41.855150 systemd[1]: Started sshd@17-172.31.18.172:22-139.178.89.65:41446.service - OpenSSH per-connection server daemon (139.178.89.65:41446). Jun 25 16:29:41.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.172:22-139.178.89.65:41446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.033525 sshd[5504]: Accepted publickey for core from 139.178.89.65 port 41446 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:42.033000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:42.034000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:42.035000 audit[5504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1e376ae0 a2=3 a3=7fddf6c59480 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:42.035000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:42.035689 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:42.044540 systemd-logind[1790]: New session 18 of user core. Jun 25 16:29:42.048435 systemd[1]: Started session-18.scope - Session 18 of User core. 
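The pam_unix "session opened" and "session closed" lines for session 17 above are less than a second apart, which is consistent with scripted, non-interactive connections rather than a person at a terminal. A quick delta computed from the journal prefixes (the prefix has no year, so strptime defaults to 1900, which does not affect the difference):

from datetime import datetime

fmt = "%b %d %H:%M:%S.%f"
opened = datetime.strptime("Jun 25 16:29:40.893869", fmt)  # session opened (sshd[5494])
closed = datetime.strptime("Jun 25 16:29:41.815810", fmt)  # session closed (sshd[5494])
print(closed - opened)  # 0:00:00.921941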
Jun 25 16:29:42.056000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:42.059000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.293904 sshd[5504]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:45.309811 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:29:45.310122 kernel: audit: type=1106 audit(1719332985.298:700): pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.310179 kernel: audit: type=1104 audit(1719332985.298:701): pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.298000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.298000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.332278 kernel: audit: type=1131 audit(1719332985.324:702): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.172:22-139.178.89.65:41446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:45.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.172:22-139.178.89.65:41446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:45.324356 systemd[1]: sshd@17-172.31.18.172:22-139.178.89.65:41446.service: Deactivated successfully. Jun 25 16:29:45.337793 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:29:45.343647 systemd-logind[1790]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:29:45.363761 kernel: audit: type=1130 audit(1719332985.352:703): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.172:22-139.178.89.65:41458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:45.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.172:22-139.178.89.65:41458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:45.351156 systemd[1]: Started sshd@18-172.31.18.172:22-139.178.89.65:41458.service - OpenSSH per-connection server daemon (139.178.89.65:41458). Jun 25 16:29:45.371790 systemd-logind[1790]: Removed session 18. Jun 25 16:29:45.423000 audit[5521]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.432421 kernel: audit: type=1325 audit(1719332985.423:704): table=filter:113 family=2 entries=20 op=nft_register_rule pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.432709 kernel: audit: type=1300 audit(1719332985.423:704): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc90616550 a2=0 a3=7ffc9061653c items=0 ppid=3352 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.423000 audit[5521]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc90616550 a2=0 a3=7ffc9061653c items=0 ppid=3352 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.443216 kernel: audit: type=1327 audit(1719332985.423:704): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.545130 kernel: audit: type=1325 audit(1719332985.526:705): table=nat:114 family=2 entries=20 op=nft_register_rule pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.545495 kernel: audit: type=1300 audit(1719332985.526:705): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc90616550 a2=0 a3=0 items=0 ppid=3352 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.545553 kernel: audit: type=1327 audit(1719332985.526:705): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.526000 audit[5521]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.526000 audit[5521]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc90616550 a2=0 a3=0 items=0 ppid=3352 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.647000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.649000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.649000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc702001f0 a2=3 a3=7f1b01866480 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.649000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:45.650538 sshd[5519]: Accepted publickey for core from 139.178.89.65 port 41458 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:45.651512 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:45.672261 systemd-logind[1790]: New session 19 of user core. Jun 25 16:29:45.674520 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:29:45.701000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.703000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:45.714000 audit[5523]: NETFILTER_CFG table=filter:115 family=2 entries=33 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.714000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffd17fea020 a2=0 a3=7ffd17fea00c items=0 ppid=3352 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.719000 audit[5523]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:45.719000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd17fea020 a2=0 a3=0 items=0 ppid=3352 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:45.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:45.753512 kubelet[3202]: I0625 16:29:45.753462 3202 topology_manager.go:215] "Topology Admit Handler" podUID="7e6cb891-8c42-4674-97c5-4ea18563ffeb" podNamespace="calico-apiserver" podName="calico-apiserver-576b46f495-77k8v" Jun 25 16:29:45.777288 systemd[1]: Created slice kubepods-besteffort-pod7e6cb891_8c42_4674_97c5_4ea18563ffeb.slice - libcontainer container kubepods-besteffort-pod7e6cb891_8c42_4674_97c5_4ea18563ffeb.slice. 
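The NETFILTER_CFG events above truncate the command name to comm="iptables-restor"; the matching PROCTITLE records carry the full command line, hex-encoded with NUL bytes separating the arguments. Decoding the value copied verbatim from the record above recovers the rule reload invocation:

# argv of the process behind the nft_register_rule events, NUL-separated in PROCTITLE.
hex_argv = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
            "002D2D6E6F666C757368002D2D636F756E74657273")
argv = [a.decode() for a in bytes.fromhex(hex_argv).split(b"\x00")]
print(argv)
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

Here -w/-W are the xtables lock wait options, --noflush applies the rules without clearing existing chains, and --counters restores packet/byte counters along with the rules.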
Jun 25 16:29:45.904130 kubelet[3202]: I0625 16:29:45.904026 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e6cb891-8c42-4674-97c5-4ea18563ffeb-calico-apiserver-certs\") pod \"calico-apiserver-576b46f495-77k8v\" (UID: \"7e6cb891-8c42-4674-97c5-4ea18563ffeb\") " pod="calico-apiserver/calico-apiserver-576b46f495-77k8v" Jun 25 16:29:45.907537 kubelet[3202]: I0625 16:29:45.907493 3202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptj29\" (UniqueName: \"kubernetes.io/projected/7e6cb891-8c42-4674-97c5-4ea18563ffeb-kube-api-access-ptj29\") pod \"calico-apiserver-576b46f495-77k8v\" (UID: \"7e6cb891-8c42-4674-97c5-4ea18563ffeb\") " pod="calico-apiserver/calico-apiserver-576b46f495-77k8v" Jun 25 16:29:46.008638 kubelet[3202]: E0625 16:29:46.008596 3202 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:29:46.108097 kubelet[3202]: E0625 16:29:46.108057 3202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e6cb891-8c42-4674-97c5-4ea18563ffeb-calico-apiserver-certs podName:7e6cb891-8c42-4674-97c5-4ea18563ffeb nodeName:}" failed. No retries permitted until 2024-06-25 16:29:46.521990774 +0000 UTC m=+85.814223050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7e6cb891-8c42-4674-97c5-4ea18563ffeb-calico-apiserver-certs") pod "calico-apiserver-576b46f495-77k8v" (UID: "7e6cb891-8c42-4674-97c5-4ea18563ffeb") : secret "calico-apiserver-certs" not found Jun 25 16:29:46.686333 containerd[1802]: time="2024-06-25T16:29:46.686271461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576b46f495-77k8v,Uid:7e6cb891-8c42-4674-97c5-4ea18563ffeb,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:29:46.899000 audit[5548]: NETFILTER_CFG table=filter:117 family=2 entries=34 op=nft_register_rule pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:46.899000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffdc4db37d0 a2=0 a3=7ffdc4db37bc items=0 ppid=3352 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:46.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:46.901000 audit[5548]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:46.901000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc4db37d0 a2=0 a3=0 items=0 ppid=3352 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:46.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:47.183822 sshd[5519]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:47.184000 audit[5519]: USER_END pid=5519 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.185000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.187962 systemd-logind[1790]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:29:47.189888 systemd[1]: sshd@18-172.31.18.172:22-139.178.89.65:41458.service: Deactivated successfully. Jun 25 16:29:47.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.172:22-139.178.89.65:41458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:47.190921 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:29:47.192753 systemd-logind[1790]: Removed session 19. Jun 25 16:29:47.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.172:22-139.178.89.65:54412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:47.222783 systemd[1]: Started sshd@19-172.31.18.172:22-139.178.89.65:54412.service - OpenSSH per-connection server daemon (139.178.89.65:54412). Jun 25 16:29:47.243622 (udev-worker)[5560]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:29:47.254293 systemd-networkd[1531]: calic9322cf97e5: Link UP Jun 25 16:29:47.273108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:47.275340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic9322cf97e5: link becomes ready Jun 25 16:29:47.272565 systemd-networkd[1531]: calic9322cf97e5: Gained carrier Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.001 [INFO][5537] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0 calico-apiserver-576b46f495- calico-apiserver 7e6cb891-8c42-4674-97c5-4ea18563ffeb 1062 0 2024-06-25 16:29:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:576b46f495 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-172 calico-apiserver-576b46f495-77k8v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9322cf97e5 [] []}} ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.003 [INFO][5537] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.106 [INFO][5551] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" HandleID="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Workload="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.141 [INFO][5551] ipam_plugin.go 264: Auto assigning IP ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" HandleID="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Workload="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000300d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-172", "pod":"calico-apiserver-576b46f495-77k8v", "timestamp":"2024-06-25 16:29:47.106463082 +0000 UTC"}, Hostname:"ip-172-31-18-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.141 [INFO][5551] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.142 [INFO][5551] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.142 [INFO][5551] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-172' Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.146 [INFO][5551] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.155 [INFO][5551] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.167 [INFO][5551] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.172 [INFO][5551] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.177 [INFO][5551] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.178 [INFO][5551] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.195 [INFO][5551] ipam.go 1685: Creating new handle: k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376 Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.205 [INFO][5551] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.214 [INFO][5551] ipam.go 1216: Successfully claimed IPs: [192.168.52.197/26] block=192.168.52.192/26 handle="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.214 [INFO][5551] ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.52.197/26] handle="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" host="ip-172-31-18-172" Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.214 [INFO][5551] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:47.298763 containerd[1802]: 2024-06-25 16:29:47.214 [INFO][5551] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.197/26] IPv6=[] ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" HandleID="k8s-pod-network.defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Workload="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.220 [INFO][5537] k8s.go 386: Populated endpoint ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0", GenerateName:"calico-apiserver-576b46f495-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6cb891-8c42-4674-97c5-4ea18563ffeb", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576b46f495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"", Pod:"calico-apiserver-576b46f495-77k8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9322cf97e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.220 [INFO][5537] k8s.go 387: Calico CNI using IPs: [192.168.52.197/32] ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.220 [INFO][5537] dataplane_linux.go 68: Setting the host side veth name to calic9322cf97e5 ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.268 [INFO][5537] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" 
WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.273 [INFO][5537] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0", GenerateName:"calico-apiserver-576b46f495-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6cb891-8c42-4674-97c5-4ea18563ffeb", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576b46f495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-172", ContainerID:"defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376", Pod:"calico-apiserver-576b46f495-77k8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9322cf97e5", MAC:"86:59:ab:79:6f:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:47.300091 containerd[1802]: 2024-06-25 16:29:47.295 [INFO][5537] k8s.go 500: Wrote updated endpoint to datastore ContainerID="defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376" Namespace="calico-apiserver" Pod="calico-apiserver-576b46f495-77k8v" WorkloadEndpoint="ip--172--31--18--172-k8s-calico--apiserver--576b46f495--77k8v-eth0" Jun 25 16:29:47.371000 audit[5577]: NETFILTER_CFG table=filter:119 family=2 entries=47 op=nft_register_chain pid=5577 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:47.371000 audit[5577]: SYSCALL arch=c000003e syscall=46 success=yes exit=25108 a0=3 a1=7ffcba8ec950 a2=0 a3=7ffcba8ec93c items=0 ppid=4240 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:47.371000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:47.433421 containerd[1802]: time="2024-06-25T16:29:47.433296257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:47.433421 containerd[1802]: time="2024-06-25T16:29:47.433366126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:47.434116 containerd[1802]: time="2024-06-25T16:29:47.433399550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:47.434116 containerd[1802]: time="2024-06-25T16:29:47.433950796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:47.457000 audit[5559]: USER_ACCT pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.459565 sshd[5559]: Accepted publickey for core from 139.178.89.65 port 54412 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:47.460000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.460000 audit[5559]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce1e8f4a0 a2=3 a3=7f1d703aa480 items=0 ppid=1 pid=5559 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:47.460000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:47.461394 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:47.472879 systemd-logind[1790]: New session 20 of user core. Jun 25 16:29:47.479112 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:29:47.491000 audit[5559]: USER_START pid=5559 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.494000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.563527 systemd[1]: run-containerd-runc-k8s.io-defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376-runc.XCE6Gl.mount: Deactivated successfully. Jun 25 16:29:47.575364 systemd[1]: Started cri-containerd-defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376.scope - libcontainer container defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376. 
Jun 25 16:29:47.623000 audit: BPF prog-id=173 op=LOAD Jun 25 16:29:47.625000 audit: BPF prog-id=174 op=LOAD Jun 25 16:29:47.625000 audit[5595]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=5586 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:47.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465666235326461633866353561666637393738373831356366653034 Jun 25 16:29:47.626000 audit: BPF prog-id=175 op=LOAD Jun 25 16:29:47.626000 audit[5595]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=5586 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:47.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465666235326461633866353561666637393738373831356366653034 Jun 25 16:29:47.626000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:29:47.626000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:29:47.627000 audit: BPF prog-id=176 op=LOAD Jun 25 16:29:47.627000 audit[5595]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=5586 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:47.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465666235326461633866353561666637393738373831356366653034 Jun 25 16:29:47.693155 containerd[1802]: time="2024-06-25T16:29:47.693037395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576b46f495-77k8v,Uid:7e6cb891-8c42-4674-97c5-4ea18563ffeb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376\"" Jun 25 16:29:47.698431 containerd[1802]: time="2024-06-25T16:29:47.698103648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:29:47.819561 sshd[5559]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:47.822000 audit[5559]: USER_END pid=5559 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.822000 audit[5559]: CRED_DISP pid=5559 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:47.824991 systemd[1]: sshd@19-172.31.18.172:22-139.178.89.65:54412.service: Deactivated successfully. 
Jun 25 16:29:47.826170 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:29:47.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.172:22-139.178.89.65:54412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:47.827607 systemd-logind[1790]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:29:47.828660 systemd-logind[1790]: Removed session 20. Jun 25 16:29:48.234586 systemd[1]: run-containerd-runc-k8s.io-08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692-runc.v5oSHD.mount: Deactivated successfully. Jun 25 16:29:48.552343 systemd-networkd[1531]: calic9322cf97e5: Gained IPv6LL Jun 25 16:29:50.322030 systemd[1]: run-containerd-runc-k8s.io-4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7-runc.31pAc4.mount: Deactivated successfully. Jun 25 16:29:52.232105 containerd[1802]: time="2024-06-25T16:29:52.232052942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:52.233973 containerd[1802]: time="2024-06-25T16:29:52.233914827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:29:52.236751 containerd[1802]: time="2024-06-25T16:29:52.236717176Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:52.239594 containerd[1802]: time="2024-06-25T16:29:52.239563575Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:52.242975 containerd[1802]: time="2024-06-25T16:29:52.242942808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:52.244097 containerd[1802]: time="2024-06-25T16:29:52.244059298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.545185275s" Jun 25 16:29:52.244267 containerd[1802]: time="2024-06-25T16:29:52.244240193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:29:52.246700 containerd[1802]: time="2024-06-25T16:29:52.246669263Z" level=info msg="CreateContainer within sandbox \"defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:29:52.277675 containerd[1802]: time="2024-06-25T16:29:52.277619200Z" level=info msg="CreateContainer within sandbox \"defb52dac8f55aff79787815cfe04fa97d39ad8749b89b8996e3ba75e63e7376\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267\"" Jun 25 16:29:52.280115 containerd[1802]: time="2024-06-25T16:29:52.278531984Z" level=info msg="StartContainer for 
\"382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267\"" Jun 25 16:29:52.336389 systemd[1]: run-containerd-runc-k8s.io-382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267-runc.GbM678.mount: Deactivated successfully. Jun 25 16:29:52.342517 systemd[1]: Started cri-containerd-382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267.scope - libcontainer container 382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267. Jun 25 16:29:52.362255 kernel: kauditd_printk_skb: 48 callbacks suppressed Jun 25 16:29:52.362410 kernel: audit: type=1334 audit(1719332992.360:734): prog-id=177 op=LOAD Jun 25 16:29:52.362455 kernel: audit: type=1334 audit(1719332992.360:735): prog-id=178 op=LOAD Jun 25 16:29:52.360000 audit: BPF prog-id=177 op=LOAD Jun 25 16:29:52.363986 kernel: audit: type=1300 audit(1719332992.360:735): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5586 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.360000 audit: BPF prog-id=178 op=LOAD Jun 25 16:29:52.360000 audit[5688]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5586 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.369695 kernel: audit: type=1327 audit(1719332992.360:735): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338326435333037666561613666616662303363336365316139373064 Jun 25 16:29:52.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338326435333037666561613666616662303363336365316139373064 Jun 25 16:29:52.371020 kernel: audit: type=1334 audit(1719332992.360:736): prog-id=179 op=LOAD Jun 25 16:29:52.360000 audit: BPF prog-id=179 op=LOAD Jun 25 16:29:52.360000 audit[5688]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5586 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.377389 kernel: audit: type=1300 audit(1719332992.360:736): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5586 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.380026 kernel: audit: type=1327 audit(1719332992.360:736): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338326435333037666561613666616662303363336365316139373064 Jun 25 16:29:52.360000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338326435333037666561613666616662303363336365316139373064 Jun 25 16:29:52.387212 kernel: audit: type=1334 audit(1719332992.360:737): prog-id=179 op=UNLOAD Jun 25 16:29:52.360000 audit: BPF prog-id=179 op=UNLOAD Jun 25 16:29:52.360000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:29:52.360000 audit: BPF prog-id=180 op=LOAD Jun 25 16:29:52.389213 kernel: audit: type=1334 audit(1719332992.360:738): prog-id=178 op=UNLOAD Jun 25 16:29:52.389265 kernel: audit: type=1334 audit(1719332992.360:739): prog-id=180 op=LOAD Jun 25 16:29:52.360000 audit[5688]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=5586 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338326435333037666561613666616662303363336365316139373064 Jun 25 16:29:52.442205 containerd[1802]: time="2024-06-25T16:29:52.442140957Z" level=info msg="StartContainer for \"382d5307feaa6fafb03c3ce1a970d62ba48a52fc2faaa169b1cac553db7b0267\" returns successfully" Jun 25 16:29:52.629254 kubelet[3202]: I0625 16:29:52.629167 3202 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-576b46f495-77k8v" podStartSLOduration=3.067473041 podStartE2EDuration="7.61516055s" podCreationTimestamp="2024-06-25 16:29:45 +0000 UTC" firstStartedPulling="2024-06-25 16:29:47.696959625 +0000 UTC m=+86.989191880" lastFinishedPulling="2024-06-25 16:29:52.244647118 +0000 UTC m=+91.536879389" observedRunningTime="2024-06-25 16:29:52.612346484 +0000 UTC m=+91.904578751" watchObservedRunningTime="2024-06-25 16:29:52.61516055 +0000 UTC m=+91.907392826" Jun 25 16:29:52.667000 audit[5721]: NETFILTER_CFG table=filter:120 family=2 entries=34 op=nft_register_rule pid=5721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.667000 audit[5721]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffe9ab2be50 a2=0 a3=7ffe9ab2be3c items=0 ppid=3352 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:52.669000 audit[5721]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.669000 audit[5721]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe9ab2be50 a2=0 a3=0 items=0 ppid=3352 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 
25 16:29:52.855833 systemd[1]: Started sshd@20-172.31.18.172:22-139.178.89.65:54422.service - OpenSSH per-connection server daemon (139.178.89.65:54422). Jun 25 16:29:52.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.172:22-139.178.89.65:54422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:53.065308 sshd[5723]: Accepted publickey for core from 139.178.89.65 port 54422 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:53.065000 audit[5723]: USER_ACCT pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.066000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.066000 audit[5723]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3bbfb1f0 a2=3 a3=7fb570230480 items=0 ppid=1 pid=5723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:53.066000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:53.068781 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:53.076832 systemd-logind[1790]: New session 21 of user core. Jun 25 16:29:53.081434 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 16:29:53.092000 audit[5723]: USER_START pid=5723 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.096000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.701000 audit[5734]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:53.701000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff09f2ba10 a2=0 a3=7fff09f2b9fc items=0 ppid=3352 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:53.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:53.716000 audit[5734]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:53.716000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff09f2ba10 a2=0 a3=7fff09f2b9fc items=0 ppid=3352 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:53.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:53.765971 sshd[5723]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:53.774000 audit[5723]: USER_END pid=5723 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.774000 audit[5723]: CRED_DISP pid=5723 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.778772 systemd[1]: sshd@20-172.31.18.172:22-139.178.89.65:54422.service: Deactivated successfully. Jun 25 16:29:53.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.172:22-139.178.89.65:54422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:53.780231 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:29:53.781378 systemd-logind[1790]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:29:53.782705 systemd-logind[1790]: Removed session 21. Jun 25 16:29:54.205860 systemd[1]: run-containerd-runc-k8s.io-08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692-runc.GWwBHw.mount: Deactivated successfully. 
Jun 25 16:29:54.280000 audit[5757]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5757 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:54.280000 audit[5757]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdf272b620 a2=0 a3=7ffdf272b60c items=0 ppid=3352 pid=5757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:54.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:54.293000 audit[5757]: NETFILTER_CFG table=nat:125 family=2 entries=51 op=nft_register_chain pid=5757 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:54.293000 audit[5757]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffdf272b620 a2=0 a3=7ffdf272b60c items=0 ppid=3352 pid=5757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:54.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:56.644000 audit[5759]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=5759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:56.644000 audit[5759]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc70f72630 a2=0 a3=7ffc70f7261c items=0 ppid=3352 pid=5759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:56.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:56.647000 audit[5759]: NETFILTER_CFG table=nat:127 family=2 entries=54 op=nft_register_rule pid=5759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:56.647000 audit[5759]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffc70f72630 a2=0 a3=7ffc70f7261c items=0 ppid=3352 pid=5759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:56.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:57.235000 audit[5761]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:57.235000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe8d0117d0 a2=0 a3=7ffe8d0117bc items=0 ppid=3352 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:57.236000 audit[5761]: NETFILTER_CFG table=nat:129 family=2 entries=58 op=nft_register_chain 
pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:57.236000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffe8d0117d0 a2=0 a3=7ffe8d0117bc items=0 ppid=3352 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:58.809167 systemd[1]: Started sshd@21-172.31.18.172:22-139.178.89.65:48530.service - OpenSSH per-connection server daemon (139.178.89.65:48530). Jun 25 16:29:58.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.172:22-139.178.89.65:48530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:58.810495 kernel: kauditd_printk_skb: 43 callbacks suppressed Jun 25 16:29:58.810550 kernel: audit: type=1130 audit(1719332998.809:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.172:22-139.178.89.65:48530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:59.044257 kernel: audit: type=1101 audit(1719332999.037:760): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.044378 kernel: audit: type=1103 audit(1719332999.039:761): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.044420 kernel: audit: type=1006 audit(1719332999.039:762): pid=5770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:29:59.037000 audit[5770]: USER_ACCT pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.039000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.042259 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:59.045226 kernel: audit: type=1300 audit(1719332999.039:762): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd208a6d30 a2=3 a3=7f7f54336480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.039000 audit[5770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd208a6d30 a2=3 a3=7f7f54336480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.045598 sshd[5770]: Accepted publickey for core from 139.178.89.65 port 48530 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:29:59.049708 kernel: audit: type=1327 audit(1719332999.039:762): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:59.039000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:59.059567 systemd-logind[1790]: New session 22 of user core. Jun 25 16:29:59.066465 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:29:59.074000 audit[5770]: USER_START pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.079797 kernel: audit: type=1105 audit(1719332999.074:763): pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.079911 kernel: audit: type=1103 audit(1719332999.078:764): pid=5772 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.078000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.548314 sshd[5770]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:59.557223 kernel: audit: type=1106 audit(1719332999.550:765): pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.557339 kernel: audit: type=1104 audit(1719332999.550:766): pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.550000 audit[5770]: USER_END pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.550000 audit[5770]: CRED_DISP pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:59.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.172:22-139.178.89.65:48530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:59.555838 systemd[1]: sshd@21-172.31.18.172:22-139.178.89.65:48530.service: Deactivated successfully. Jun 25 16:29:59.556812 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:29:59.558796 systemd-logind[1790]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:29:59.560113 systemd-logind[1790]: Removed session 22. Jun 25 16:30:04.587439 systemd[1]: Started sshd@22-172.31.18.172:22-139.178.89.65:48542.service - OpenSSH per-connection server daemon (139.178.89.65:48542). Jun 25 16:30:04.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.172:22-139.178.89.65:48542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:04.596417 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:04.596777 kernel: audit: type=1130 audit(1719333004.588:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.172:22-139.178.89.65:48542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:04.764000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.765243 sshd[5782]: Accepted publickey for core from 139.178.89.65 port 48542 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:30:04.770221 kernel: audit: type=1101 audit(1719333004.764:769): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.770373 kernel: audit: type=1103 audit(1719333004.767:770): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.767000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.769034 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:04.772012 kernel: audit: type=1006 audit(1719333004.768:771): pid=5782 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:30:04.772075 kernel: audit: type=1300 audit(1719333004.768:771): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8f4f6750 a2=3 a3=7f5ba6f81480 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:04.768000 audit[5782]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8f4f6750 a2=3 a3=7f5ba6f81480 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:04.775129 
kernel: audit: type=1327 audit(1719333004.768:771): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:04.768000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:04.780700 systemd-logind[1790]: New session 23 of user core. Jun 25 16:30:04.783401 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:30:04.793000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.798000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.801214 kernel: audit: type=1105 audit(1719333004.793:772): pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.801303 kernel: audit: type=1103 audit(1719333004.798:773): pid=5784 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:05.090495 sshd[5782]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:05.092000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:05.096262 kernel: audit: type=1106 audit(1719333005.092:774): pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:05.095474 systemd-logind[1790]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:30:05.092000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:05.097155 systemd[1]: sshd@22-172.31.18.172:22-139.178.89.65:48542.service: Deactivated successfully. Jun 25 16:30:05.100317 kernel: audit: type=1104 audit(1719333005.092:775): pid=5782 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:05.098426 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:30:05.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.172:22-139.178.89.65:48542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:05.100680 systemd-logind[1790]: Removed session 23. Jun 25 16:30:10.147771 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:10.148261 kernel: audit: type=1130 audit(1719333010.141:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.172:22-139.178.89.65:38018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:10.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.172:22-139.178.89.65:38018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:10.141010 systemd[1]: Started sshd@23-172.31.18.172:22-139.178.89.65:38018.service - OpenSSH per-connection server daemon (139.178.89.65:38018). Jun 25 16:30:10.321000 audit[5802]: USER_ACCT pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.321932 sshd[5802]: Accepted publickey for core from 139.178.89.65 port 38018 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:30:10.324311 kernel: audit: type=1101 audit(1719333010.321:778): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.324000 audit[5802]: CRED_ACQ pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.325464 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:10.332870 kernel: audit: type=1103 audit(1719333010.324:779): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.333304 kernel: audit: type=1006 audit(1719333010.324:780): pid=5802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:30:10.333373 kernel: audit: type=1300 audit(1719333010.324:780): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb9434830 a2=3 a3=7f700ef7b480 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:10.324000 audit[5802]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb9434830 a2=3 a3=7f700ef7b480 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:10.324000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:10.335282 kernel: audit: type=1327 audit(1719333010.324:780): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:10.339303 systemd-logind[1790]: New session 24 of user core. 
Jun 25 16:30:10.344083 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 16:30:10.358265 kernel: audit: type=1105 audit(1719333010.352:781): pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.352000 audit[5802]: USER_START pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.358000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.362373 kernel: audit: type=1103 audit(1719333010.358:782): pid=5804 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.684879 sshd[5802]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:10.687000 audit[5802]: USER_END pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.687000 audit[5802]: CRED_DISP pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.692759 kernel: audit: type=1106 audit(1719333010.687:783): pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.692868 kernel: audit: type=1104 audit(1719333010.687:784): pid=5802 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.695149 systemd[1]: sshd@23-172.31.18.172:22-139.178.89.65:38018.service: Deactivated successfully. Jun 25 16:30:10.696430 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:30:10.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.172:22-139.178.89.65:38018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:10.697868 systemd-logind[1790]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:30:10.699077 systemd-logind[1790]: Removed session 24. 
Jun 25 16:30:14.646000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:14.646000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:14.646000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001514ae0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:14.646000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:14.646000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000c497c0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:14.646000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:15.725926 systemd[1]: Started sshd@24-172.31.18.172:22-139.178.89.65:38020.service - OpenSSH per-connection server daemon (139.178.89.65:38020). Jun 25 16:30:15.729649 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:30:15.729737 kernel: audit: type=1130 audit(1719333015.726:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.172:22-139.178.89.65:38020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:15.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.172:22-139.178.89.65:38020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:15.904296 kernel: audit: type=1101 audit(1719333015.901:789): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.901000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.904672 sshd[5813]: Accepted publickey for core from 139.178.89.65 port 38020 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:30:15.912000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.922970 kernel: audit: type=1103 audit(1719333015.912:790): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.923088 kernel: audit: type=1006 audit(1719333015.913:791): pid=5813 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:30:15.923126 kernel: audit: type=1300 audit(1719333015.913:791): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddef07e70 a2=3 a3=7fb84fc5c480 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:15.913000 audit[5813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddef07e70 a2=3 a3=7fb84fc5c480 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:15.918085 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:15.913000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:15.927263 kernel: audit: type=1327 audit(1719333015.913:791): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:15.943151 systemd-logind[1790]: New session 25 of user core. Jun 25 16:30:15.948484 systemd[1]: Started session-25.scope - Session 25 of User core. 
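The `SHA256:YLA6Yd…` value in the "Accepted publickey" lines is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing `=` padding stripped. A small sketch for computing it from a one-line public key, so the logged fingerprint can be matched against a known key; the file path is only illustrative:

```python
# Sketch: compute an OpenSSH-style SHA256 key fingerprint so it can be compared
# with the value sshd logs ("RSA SHA256:..."). Assumes the standard one-line
# public key format ("ssh-rsa AAAA... comment"); the path below is hypothetical.
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])   # second field is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Example usage (hypothetical path):
# with open("/home/core/.ssh/authorized_keys") as fh:
#     print(openssh_sha256_fingerprint(fh.readline()))
```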
Jun 25 16:30:15.955000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.959230 kernel: audit: type=1105 audit(1719333015.955:792): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.959000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.963249 kernel: audit: type=1103 audit(1719333015.959:793): pid=5815 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.377154 sshd[5813]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:16.380000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.381000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.387361 kernel: audit: type=1106 audit(1719333016.380:794): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.387471 kernel: audit: type=1104 audit(1719333016.381:795): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.386340 systemd[1]: sshd@24-172.31.18.172:22-139.178.89.65:38020.service: Deactivated successfully. Jun 25 16:30:16.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.172:22-139.178.89.65:38020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:16.388065 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:30:16.389425 systemd-logind[1790]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:30:16.390748 systemd-logind[1790]: Removed session 25. 
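The recurring `AVC avc: denied { watch }` records (from kube-controller-manager above and, further below, from kube-apiserver) are SELinux denials rather than failures of the components themselves: each paired SYSCALL record shows `arch=c000003e` (x86_64), `syscall=254`, which in the x86_64 syscall table is `inotify_add_watch`, and `exit=-13`, i.e. `-EACCES`, so processes confined as `container_t` are refused inotify watches on certificate files labeled `etc_t` (`permissive=0`). Userspace tools such as `ausearch -i` render these fields symbolically; the sketch below does the same translation for just the values seen here (the syscall-number mapping is an assumption taken from the x86_64 table, not from this log):

```python
# Sketch: translate the numeric fields of the SYSCALL records above.
# Assumption: arch c000003e is AUDIT_ARCH_X86_64 and 254 is inotify_add_watch
# in the x86_64 syscall table; a failed syscall reports exit as a negated errno.
import errno

SYSCALLS_X86_64 = {254: "inotify_add_watch"}   # only the number seen in this log

def describe(arch: str, nr: int, exit_code: int) -> str:
    arch_name = "x86_64" if arch.lower() == "c000003e" else f"arch {arch}"
    name = SYSCALLS_X86_64.get(nr, f"syscall {nr}")
    result = errno.errorcode.get(-exit_code, str(exit_code)) if exit_code < 0 else "success"
    return f"{name} ({arch_name}) -> {result}"

print(describe("c000003e", 254, -13))   # inotify_add_watch (x86_64) -> EACCES
```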
Jun 25 16:30:16.408000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.408000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c01502ae70 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.408000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:16.409000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.409000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c0155dee40 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.409000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:16.413000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.413000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c01502af90 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.413000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:16.416000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.416000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c00fbf1aa0 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.416000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:16.421000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.421000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c0155df260 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.421000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:16.422000 audit[2816]: AVC avc: denied { watch } for pid=2816 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c565,c622 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:16.422000 audit[2816]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c00fc91880 a2=fc6 a3=0 items=0 ppid=2644 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c565,c622 key=(null) Jun 25 16:30:16.422000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E313732002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:20.051000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:20.051000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c49ea0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:20.051000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:20.054000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:20.054000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c49ec0 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:20.054000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:20.065000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:20.065000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001210080 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:20.065000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:20.066000 audit[2803]: AVC avc: denied { watch } for pid=2803 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c383,c759 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:20.066000 audit[2803]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00125cc40 a2=fc6 a3=0 items=0 ppid=2632 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c383,c759 key=(null) Jun 25 16:30:20.066000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:20.362933 systemd[1]: run-containerd-runc-k8s.io-4e19be48daa643b128158c3525d15503df222db4c04a1d9e678b013ede6dede7-runc.DxwMzt.mount: Deactivated successfully. Jun 25 16:30:21.414982 systemd[1]: Started sshd@25-172.31.18.172:22-139.178.89.65:54566.service - OpenSSH per-connection server daemon (139.178.89.65:54566). Jun 25 16:30:21.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.172:22-139.178.89.65:54566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:21.419395 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:30:21.419514 kernel: audit: type=1130 audit(1719333021.415:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.172:22-139.178.89.65:54566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:21.594000 audit[5854]: USER_ACCT pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.594000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.596561 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:21.601844 sshd[5854]: Accepted publickey for core from 139.178.89.65 port 54566 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:30:21.604817 kernel: audit: type=1101 audit(1719333021.594:808): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.606897 kernel: audit: type=1103 audit(1719333021.594:809): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.606973 kernel: audit: type=1006 audit(1719333021.594:810): pid=5854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:30:21.612363 kernel: audit: type=1300 audit(1719333021.594:810): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9bb7db30 a2=3 a3=7fa7afcb1480 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:21.594000 audit[5854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9bb7db30 a2=3 a3=7fa7afcb1480 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:21.594000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:21.619252 kernel: audit: type=1327 audit(1719333021.594:810): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:21.632783 systemd-logind[1790]: New session 26 of user core. Jun 25 16:30:21.638457 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 16:30:21.649000 audit[5854]: USER_START pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.659093 kernel: audit: type=1105 audit(1719333021.649:811): pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.659318 kernel: audit: type=1103 audit(1719333021.649:812): pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.649000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.920945 sshd[5854]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:21.922000 audit[5854]: USER_END pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.922000 audit[5854]: CRED_DISP pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.927760 systemd-logind[1790]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:30:21.929140 kernel: audit: type=1106 audit(1719333021.922:813): pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.929232 kernel: audit: type=1104 audit(1719333021.922:814): pid=5854 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.929938 systemd[1]: sshd@25-172.31.18.172:22-139.178.89.65:54566.service: Deactivated successfully. Jun 25 16:30:21.930951 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:30:21.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.172:22-139.178.89.65:54566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:21.932113 systemd-logind[1790]: Removed session 26. Jun 25 16:30:24.179032 systemd[1]: run-containerd-runc-k8s.io-08df93cd77925cb465677453ef00510b358c84d5dd2495b12981a4ac0103c692-runc.Rlp8Ij.mount: Deactivated successfully.
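The `run-containerd-runc-…runc.*.mount: Deactivated successfully` messages interleaved above appear to be routine cleanup of transient mount units that systemd tracks while containerd invokes runc, and the SSH entries follow a strict open/close pattern. As a rough aid for reading dumps in this format, a small parsing sketch (the input path is hypothetical) that pairs systemd-logind "New session" and "Removed session" messages and reports any session left open:

```python
# Sketch: pair systemd-logind "New session N of user U" / "Removed session N."
# messages from a console dump in the format above and report sessions that
# were opened but never closed. The input path is hypothetical.
import re

NEW = re.compile(r"systemd-logind\[\d+\]: New session (\d+) of user (\w+)")
REMOVED = re.compile(r"systemd-logind\[\d+\]: Removed session (\d+)\.")

def unclosed_sessions(text: str) -> dict:
    opened = {sid: user for sid, user in NEW.findall(text)}
    for sid in REMOVED.findall(text):
        opened.pop(sid, None)
    return opened            # session id -> user name, still open at end of log

if __name__ == "__main__":
    with open("console.log") as fh:      # hypothetical saved copy of this log
        print(unclosed_sessions(fh.read()))
```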