Jun 25 16:22:39.930055 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:22:39.930088 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:39.930103 kernel: BIOS-provided physical RAM map: Jun 25 16:22:39.930115 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:22:39.930125 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:22:39.930135 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:22:39.930149 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jun 25 16:22:39.930159 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jun 25 16:22:39.930168 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jun 25 16:22:39.930178 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:22:39.930188 kernel: NX (Execute Disable) protection: active Jun 25 16:22:39.930199 kernel: SMBIOS 2.7 present. Jun 25 16:22:39.930210 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 25 16:22:39.930221 kernel: Hypervisor detected: KVM Jun 25 16:22:39.930238 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:22:39.930248 kernel: kvm-clock: using sched offset of 7604463435 cycles Jun 25 16:22:39.930262 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:22:39.930274 kernel: tsc: Detected 2499.998 MHz processor Jun 25 16:22:39.930284 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:22:39.930295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:22:39.930307 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jun 25 16:22:39.930322 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:22:39.930334 kernel: Using GB pages for direct mapping Jun 25 16:22:39.930348 kernel: ACPI: Early table checksum verification disabled Jun 25 16:22:39.930361 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jun 25 16:22:39.930372 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jun 25 16:22:39.930383 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 16:22:39.930395 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 25 16:22:39.930407 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jun 25 16:22:39.930421 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:22:39.930434 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 16:22:39.930445 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 25 16:22:39.930462 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 16:22:39.930476 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON 
AMZNWAET 00000001 AMZN 00000001) Jun 25 16:22:39.930489 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 25 16:22:39.930500 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:22:39.930511 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jun 25 16:22:39.930523 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jun 25 16:22:39.930538 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jun 25 16:22:39.930552 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jun 25 16:22:39.930570 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jun 25 16:22:39.930584 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jun 25 16:22:39.930598 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jun 25 16:22:39.930613 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jun 25 16:22:39.930726 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jun 25 16:22:39.930742 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jun 25 16:22:39.930755 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:22:39.930767 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:22:39.930780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 25 16:22:39.930793 kernel: NUMA: Initialized distance table, cnt=1 Jun 25 16:22:39.930806 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jun 25 16:22:39.930820 kernel: Zone ranges: Jun 25 16:22:39.930833 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:22:39.930850 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jun 25 16:22:39.930862 kernel: Normal empty Jun 25 16:22:39.930875 kernel: Movable zone start for each node Jun 25 16:22:39.930887 kernel: Early memory node ranges Jun 25 16:22:39.930900 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:22:39.930923 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jun 25 16:22:39.930936 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jun 25 16:22:39.930951 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:22:39.930966 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:22:39.930985 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jun 25 16:22:39.930999 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 16:22:39.931014 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:22:39.931029 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 25 16:22:39.931043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:22:39.931057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:22:39.931070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:22:39.931083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:22:39.931098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:22:39.931115 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:22:39.931130 kernel: TSC deadline timer available Jun 25 16:22:39.931144 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:22:39.931159 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jun 25 16:22:39.931173 kernel: Booting paravirtualized kernel on KVM Jun 25 16:22:39.931188 kernel: 
clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:22:39.931202 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:22:39.931217 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:22:39.931232 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:22:39.931249 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:22:39.931263 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:22:39.931277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:22:39.931292 kernel: Fallback order for Node 0: 0 Jun 25 16:22:39.931307 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jun 25 16:22:39.931321 kernel: Policy zone: DMA32 Jun 25 16:22:39.931338 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:39.931353 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:22:39.931370 kernel: random: crng init done Jun 25 16:22:39.931384 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:22:39.931398 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:22:39.931413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:22:39.931428 kernel: Memory: 1928268K/2057760K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129232K reserved, 0K cma-reserved) Jun 25 16:22:39.931443 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:22:39.931458 kernel: Kernel/User page tables isolation: enabled Jun 25 16:22:39.931472 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:22:39.931487 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:22:39.931505 kernel: Dynamic Preempt: voluntary Jun 25 16:22:39.931519 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:22:39.931535 kernel: rcu: RCU event tracing is enabled. Jun 25 16:22:39.931550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:22:39.931564 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:22:39.931579 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:22:39.931593 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:22:39.931608 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:22:39.931645 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:22:39.931664 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:22:39.931678 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 25 16:22:39.931691 kernel: Console: colour VGA+ 80x25 Jun 25 16:22:39.931706 kernel: printk: console [ttyS0] enabled Jun 25 16:22:39.931720 kernel: ACPI: Core revision 20220331 Jun 25 16:22:39.931734 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 25 16:22:39.931749 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:22:39.931763 kernel: x2apic enabled Jun 25 16:22:39.931777 kernel: Switched APIC routing to physical x2apic. Jun 25 16:22:39.931792 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 16:22:39.931810 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jun 25 16:22:39.931825 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:22:39.931852 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:22:39.931869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:22:39.931884 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:22:39.931900 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:22:39.931915 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:22:39.931930 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 16:22:39.931945 kernel: RETBleed: Vulnerable Jun 25 16:22:39.931960 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:22:39.931975 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:22:39.931990 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:22:39.932005 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:22:39.932023 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:22:39.932038 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:22:39.932053 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:22:39.932068 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 25 16:22:39.932083 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 25 16:22:39.932102 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 16:22:39.932116 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 16:22:39.932131 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 16:22:39.932147 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 25 16:22:39.932162 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:22:39.932176 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 25 16:22:39.932191 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 25 16:22:39.932206 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 25 16:22:39.932220 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 25 16:22:39.932235 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 25 16:22:39.932251 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 25 16:22:39.932266 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jun 25 16:22:39.932285 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:22:39.932299 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:22:39.932314 kernel: LSM: Security Framework initializing Jun 25 16:22:39.932329 kernel: SELinux: Initializing. Jun 25 16:22:39.932345 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:22:39.932360 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:22:39.932375 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 16:22:39.932391 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:39.932406 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:22:39.932422 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:39.932437 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:22:39.932456 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:39.932471 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:22:39.932487 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 16:22:39.932502 kernel: signal: max sigframe size: 3632 Jun 25 16:22:39.932518 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:22:39.932534 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:22:39.932550 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:22:39.932564 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:22:39.932578 kernel: x86: Booting SMP configuration: Jun 25 16:22:39.932596 kernel: .... node #0, CPUs: #1 Jun 25 16:22:39.932611 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 16:22:39.932650 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jun 25 16:22:39.932662 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:22:39.932674 kernel: smpboot: Max logical packages: 1 Jun 25 16:22:39.932687 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jun 25 16:22:39.932701 kernel: devtmpfs: initialized Jun 25 16:22:39.932715 kernel: x86/mm: Memory block size: 128MB Jun 25 16:22:39.932729 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:22:39.932746 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:22:39.932915 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:22:39.932931 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:22:39.932946 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:22:39.932961 kernel: audit: type=2000 audit(1719332559.295:1): state=initialized audit_enabled=0 res=1 Jun 25 16:22:39.932974 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:22:39.932989 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:22:39.933004 kernel: cpuidle: using governor menu Jun 25 16:22:39.933018 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:22:39.933038 kernel: dca service started, version 1.12.1 Jun 25 16:22:39.933140 kernel: PCI: Using configuration type 1 for base access Jun 25 16:22:39.933163 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:22:39.933181 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:22:39.933196 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:22:39.933212 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:22:39.933228 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:22:39.933244 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:22:39.933260 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:22:39.933280 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:22:39.933297 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:22:39.933313 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 16:22:39.933329 kernel: ACPI: Interpreter enabled Jun 25 16:22:39.933346 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:22:39.933361 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:22:39.933377 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:22:39.933393 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:22:39.933409 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 16:22:39.933425 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:22:39.933649 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:22:39.933789 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:22:39.933914 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jun 25 16:22:39.933933 kernel: acpiphp: Slot [3] registered Jun 25 16:22:39.933948 kernel: acpiphp: Slot [4] registered Jun 25 16:22:39.933962 kernel: acpiphp: Slot [5] registered Jun 25 16:22:39.933980 kernel: acpiphp: Slot [6] registered Jun 25 16:22:39.933996 kernel: acpiphp: Slot [7] registered Jun 25 16:22:39.934010 kernel: acpiphp: Slot [8] registered Jun 25 16:22:39.934024 kernel: acpiphp: Slot [9] registered Jun 25 16:22:39.934038 kernel: acpiphp: Slot [10] registered Jun 25 16:22:39.934053 kernel: acpiphp: Slot [11] registered Jun 25 16:22:39.934067 kernel: acpiphp: Slot [12] registered Jun 25 16:22:39.934081 kernel: acpiphp: Slot [13] registered Jun 25 16:22:39.934096 kernel: acpiphp: Slot [14] registered Jun 25 16:22:39.934110 kernel: acpiphp: Slot [15] registered Jun 25 16:22:39.934127 kernel: acpiphp: Slot [16] registered Jun 25 16:22:39.934142 kernel: acpiphp: Slot [17] registered Jun 25 16:22:39.934158 kernel: acpiphp: Slot [18] registered Jun 25 16:22:39.934173 kernel: acpiphp: Slot [19] registered Jun 25 16:22:39.934186 kernel: acpiphp: Slot [20] registered Jun 25 16:22:39.934200 kernel: acpiphp: Slot [21] registered Jun 25 16:22:39.934215 kernel: acpiphp: Slot [22] registered Jun 25 16:22:39.934230 kernel: acpiphp: Slot [23] registered Jun 25 16:22:39.934246 kernel: acpiphp: Slot [24] registered Jun 25 16:22:39.934263 kernel: acpiphp: Slot [25] registered Jun 25 16:22:39.934276 kernel: acpiphp: Slot [26] registered Jun 25 16:22:39.934291 kernel: acpiphp: Slot [27] registered Jun 25 16:22:39.934305 kernel: acpiphp: Slot [28] registered Jun 25 16:22:39.934320 kernel: acpiphp: Slot [29] registered Jun 25 16:22:39.934335 kernel: acpiphp: Slot [30] registered Jun 25 16:22:39.934350 kernel: acpiphp: Slot [31] registered Jun 25 16:22:39.934366 kernel: PCI host bridge to bus 0000:00 Jun 25 16:22:39.934509 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:22:39.934678 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:22:39.934791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:22:39.934907 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:22:39.935031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:22:39.935165 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:22:39.935297 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:22:39.935428 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 25 16:22:39.935556 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 16:22:39.935694 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 16:22:39.935816 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 25 16:22:39.935943 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 25 16:22:39.936153 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 25 16:22:39.936293 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 25 16:22:39.936427 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 25 16:22:39.936555 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 25 16:22:39.936708 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 11718 usecs Jun 25 16:22:39.936849 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 25 16:22:39.936981 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jun 25 16:22:39.937111 kernel: pci 0000:00:03.0: 
reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 16:22:39.937252 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:22:39.937407 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 16:22:39.937529 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jun 25 16:22:39.937685 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 16:22:39.937812 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jun 25 16:22:39.937831 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:22:39.937845 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:22:39.937860 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:22:39.937875 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:22:39.937894 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:22:39.937909 kernel: iommu: Default domain type: Translated Jun 25 16:22:39.937924 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:22:39.937939 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:22:39.937954 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:22:39.937970 kernel: PTP clock support registered Jun 25 16:22:39.937985 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:22:39.938171 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:22:39.938189 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:22:39.938208 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jun 25 16:22:39.938360 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 25 16:22:39.938491 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 25 16:22:39.938634 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:22:39.938654 kernel: vgaarb: loaded Jun 25 16:22:39.938670 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 25 16:22:39.938686 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 25 16:22:39.938701 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:22:39.938720 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:22:39.938735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:22:39.938750 kernel: pnp: PnP ACPI init Jun 25 16:22:39.938765 kernel: pnp: PnP ACPI: found 5 devices Jun 25 16:22:39.938781 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:22:39.938796 kernel: NET: Registered PF_INET protocol family Jun 25 16:22:39.938811 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:22:39.938827 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:22:39.938842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:22:39.938860 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:22:39.938875 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:22:39.938890 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:22:39.938904 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:22:39.938928 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:22:39.938942 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:22:39.938958 kernel: NET: 
Registered PF_XDP protocol family Jun 25 16:22:39.939079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:22:39.939197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:22:39.939313 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:22:39.939424 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:22:39.939555 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:22:39.939575 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:22:39.939590 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:22:39.939606 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 16:22:39.939635 kernel: clocksource: Switched to clocksource tsc Jun 25 16:22:39.939654 kernel: Initialise system trusted keyrings Jun 25 16:22:39.939670 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:22:39.939684 kernel: Key type asymmetric registered Jun 25 16:22:39.939699 kernel: Asymmetric key parser 'x509' registered Jun 25 16:22:39.939713 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:22:39.939729 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:22:39.939744 kernel: io scheduler mq-deadline registered Jun 25 16:22:39.939759 kernel: io scheduler kyber registered Jun 25 16:22:39.939774 kernel: io scheduler bfq registered Jun 25 16:22:39.939792 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:22:39.939808 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:22:39.939823 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:22:39.939838 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:22:39.939854 kernel: i8042: Warning: Keylock active Jun 25 16:22:39.939869 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:22:39.939884 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:22:39.940023 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 16:22:39.940143 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 16:22:39.940263 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T16:22:39 UTC (1719332559) Jun 25 16:22:39.940378 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 16:22:39.940397 kernel: intel_pstate: CPU model not supported Jun 25 16:22:39.940412 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:22:39.940427 kernel: Segment Routing with IPv6 Jun 25 16:22:39.940442 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:22:39.940457 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:22:39.940473 kernel: Key type dns_resolver registered Jun 25 16:22:39.940490 kernel: IPI shorthand broadcast: enabled Jun 25 16:22:39.940505 kernel: sched_clock: Marking stable (635803364, 280999550)->(1051467869, -134664955) Jun 25 16:22:39.940520 kernel: registered taskstats version 1 Jun 25 16:22:39.940535 kernel: Loading compiled-in X.509 certificates Jun 25 16:22:39.940550 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:22:39.940565 kernel: Key type .fscrypt registered Jun 25 16:22:39.940580 kernel: Key type fscrypt-provisioning registered Jun 25 16:22:39.940595 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:22:39.940610 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:22:39.940641 kernel: ima: No architecture policies found Jun 25 16:22:39.940656 kernel: clk: Disabling unused clocks Jun 25 16:22:39.940671 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:22:39.940686 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:22:39.940701 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:22:39.940716 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:22:39.940731 kernel: Run /init as init process Jun 25 16:22:39.940745 kernel: with arguments: Jun 25 16:22:39.940761 kernel: /init Jun 25 16:22:39.940778 kernel: with environment: Jun 25 16:22:39.940814 kernel: HOME=/ Jun 25 16:22:39.940832 kernel: TERM=linux Jun 25 16:22:39.940847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:22:39.940866 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:22:39.940886 systemd[1]: Detected virtualization amazon. Jun 25 16:22:39.940902 systemd[1]: Detected architecture x86-64. Jun 25 16:22:39.940921 systemd[1]: Running in initrd. Jun 25 16:22:39.940937 systemd[1]: No hostname configured, using default hostname. Jun 25 16:22:39.940953 systemd[1]: Hostname set to <localhost>. Jun 25 16:22:39.941025 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:22:39.941042 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:22:39.941058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:39.941075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:39.941091 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:22:39.941111 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:22:39.941127 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:22:39.941143 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:22:39.941160 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:22:39.941177 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:22:39.941193 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:22:39.941210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:22:39.941229 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:22:39.941246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:39.941263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:39.941280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:39.941297 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:22:39.941313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:22:39.941329 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:22:39.941346 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:22:39.941362 systemd[1]: Starting systemd-journald.service - Journal Service.
Jun 25 16:22:39.941381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:22:39.941399 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:22:39.941415 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:39.941432 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:22:39.941452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:22:39.941477 systemd-journald[180]: Journal started Jun 25 16:22:39.941548 systemd-journald[180]: Runtime Journal (/run/log/journal/ec2cc1a654d50049a0d96e89888c7e33) is 4.8M, max 38.6M, 33.8M free. Jun 25 16:22:39.950658 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:22:39.956183 systemd-modules-load[181]: Inserted module 'overlay' Jun 25 16:22:40.074311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:22:40.074338 kernel: Bridge firewalling registered Jun 25 16:22:40.074351 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:22:40.074362 kernel: SCSI subsystem initialized Jun 25 16:22:40.074501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:22:40.074520 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:22:40.074532 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:22:39.995531 systemd-modules-load[181]: Inserted module 'br_netfilter' Jun 25 16:22:40.077445 kernel: audit: type=1130 audit(1719332560.073:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.043422 systemd-modules-load[181]: Inserted module 'dm_multipath' Jun 25 16:22:40.081666 kernel: audit: type=1130 audit(1719332560.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.075042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:40.081921 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:22:40.086984 kernel: audit: type=1130 audit(1719332560.083:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.087187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 25 16:22:40.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.093658 kernel: audit: type=1130 audit(1719332560.089:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.100098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:22:40.104384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:22:40.108665 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:22:40.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.134412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:40.146601 kernel: audit: type=1130 audit(1719332560.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.147019 kernel: audit: type=1334 audit(1719332560.137:7): prog-id=6 op=LOAD Jun 25 16:22:40.137000 audit: BPF prog-id=6 op=LOAD Jun 25 16:22:40.148074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:22:40.152879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:40.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.165653 kernel: audit: type=1130 audit(1719332560.155:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.164204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:22:40.175653 kernel: audit: type=1130 audit(1719332560.165:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.171382 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 16:22:40.194510 dracut-cmdline[208]: dracut-dracut-053 Jun 25 16:22:40.198650 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:40.240514 systemd-resolved[203]: Positive Trust Anchors: Jun 25 16:22:40.240909 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:22:40.240963 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:22:40.247696 systemd-resolved[203]: Defaulting to hostname 'linux'. Jun 25 16:22:40.250397 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:22:40.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.259771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:40.264675 kernel: audit: type=1130 audit(1719332560.259:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.343655 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:22:40.359653 kernel: iscsi: registered transport (tcp) Jun 25 16:22:40.388658 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:22:40.388727 kernel: QLogic iSCSI HBA Driver Jun 25 16:22:40.438432 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:22:40.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.447868 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:22:40.525752 kernel: raid6: avx512x4 gen() 12440 MB/s Jun 25 16:22:40.542675 kernel: raid6: avx512x2 gen() 14187 MB/s Jun 25 16:22:40.559678 kernel: raid6: avx512x1 gen() 14270 MB/s Jun 25 16:22:40.576669 kernel: raid6: avx2x4 gen() 15452 MB/s Jun 25 16:22:40.593676 kernel: raid6: avx2x2 gen() 11941 MB/s Jun 25 16:22:40.610783 kernel: raid6: avx2x1 gen() 10681 MB/s Jun 25 16:22:40.610854 kernel: raid6: using algorithm avx2x4 gen() 15452 MB/s Jun 25 16:22:40.629034 kernel: raid6: .... 
xor() 4792 MB/s, rmw enabled Jun 25 16:22:40.629125 kernel: raid6: using avx512x2 recovery algorithm Jun 25 16:22:40.633690 kernel: xor: automatically using best checksumming function avx Jun 25 16:22:40.851183 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:22:40.866290 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:22:40.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.867000 audit: BPF prog-id=7 op=LOAD Jun 25 16:22:40.867000 audit: BPF prog-id=8 op=LOAD Jun 25 16:22:40.873983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:40.912114 systemd-udevd[384]: Using default interface naming scheme 'v252'. Jun 25 16:22:40.924202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:22:40.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:40.955111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:22:40.990531 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Jun 25 16:22:41.062764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:22:41.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:41.070843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:22:41.161928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:22:41.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:41.264652 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:22:41.271301 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 16:22:41.317842 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 16:22:41.318026 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 25 16:22:41.318398 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:22:41.318433 kernel: AES CTR mode by8 optimization enabled Jun 25 16:22:41.318657 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:47:46:ae:20:8d Jun 25 16:22:41.323497 (udev-worker)[435]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:22:41.374145 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 16:22:41.374516 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:22:41.382648 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 16:22:41.385658 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:22:41.386652 kernel: GPT:9289727 != 16777215 Jun 25 16:22:41.386723 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:22:41.386737 kernel: GPT:9289727 != 16777215 Jun 25 16:22:41.386748 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 25 16:22:41.386796 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:22:41.497650 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (441) Jun 25 16:22:41.542786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:22:41.561645 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (435) Jun 25 16:22:41.569209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 16:22:41.606853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 16:22:41.630471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 16:22:41.630591 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 16:22:41.641892 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:22:41.651987 disk-uuid[603]: Primary Header is updated. Jun 25 16:22:41.651987 disk-uuid[603]: Secondary Entries is updated. Jun 25 16:22:41.651987 disk-uuid[603]: Secondary Header is updated. Jun 25 16:22:41.657652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:22:41.666674 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:22:41.672650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:22:42.670675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:22:42.671898 disk-uuid[604]: The operation has completed successfully. Jun 25 16:22:42.855226 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:22:42.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:42.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:42.855444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:22:42.872954 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:22:42.879995 sh[944]: Success Jun 25 16:22:42.906863 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:22:43.017509 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:22:43.023638 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:22:43.035877 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:22:43.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:43.061094 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:22:43.061155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:43.061173 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:22:43.061190 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:22:43.066906 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:22:43.194674 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:22:43.230363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:22:43.232248 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:22:43.241947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:22:43.249089 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:22:43.265159 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:43.265224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:43.265243 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:22:43.280655 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:22:43.293282 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:22:43.295110 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:43.306505 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:22:43.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.310911 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:22:43.361789 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:22:43.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.364000 audit: BPF prog-id=9 op=LOAD Jun 25 16:22:43.371346 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:22:43.404445 systemd-networkd[1134]: lo: Link UP Jun 25 16:22:43.404458 systemd-networkd[1134]: lo: Gained carrier Jun 25 16:22:43.406584 systemd-networkd[1134]: Enumeration completed Jun 25 16:22:43.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.406883 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:22:43.408716 systemd[1]: Reached target network.target - Network. Jun 25 16:22:43.432255 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:22:43.436086 systemd-networkd[1134]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:43.436183 systemd-networkd[1134]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 16:22:43.441701 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:22:43.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.446740 systemd-networkd[1134]: eth0: Link UP Jun 25 16:22:43.446750 systemd-networkd[1134]: eth0: Gained carrier Jun 25 16:22:43.446761 systemd-networkd[1134]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:43.448938 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:22:43.456198 iscsid[1139]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:22:43.456198 iscsid[1139]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:22:43.456198 iscsid[1139]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:22:43.456198 iscsid[1139]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:22:43.456198 iscsid[1139]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:22:43.456198 iscsid[1139]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:22:43.457935 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:22:43.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.478738 systemd-networkd[1134]: eth0: DHCPv4 address 172.31.29.32/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:22:43.482904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:22:43.527806 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:22:43.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.528061 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:22:43.533806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:43.536243 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:22:43.542945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:22:43.570567 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:22:43.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jun 25 16:22:43.709664 ignition[1074]: Ignition 2.15.0 Jun 25 16:22:43.709683 ignition[1074]: Stage: fetch-offline Jun 25 16:22:43.709940 ignition[1074]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:43.709953 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:43.711797 ignition[1074]: Ignition finished successfully Jun 25 16:22:43.715446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:22:43.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.720825 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:22:43.739445 ignition[1158]: Ignition 2.15.0 Jun 25 16:22:43.739459 ignition[1158]: Stage: fetch Jun 25 16:22:43.739879 ignition[1158]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:43.739894 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:43.740379 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:43.751559 ignition[1158]: PUT result: OK Jun 25 16:22:43.758548 ignition[1158]: parsed url from cmdline: "" Jun 25 16:22:43.758562 ignition[1158]: no config URL provided Jun 25 16:22:43.758574 ignition[1158]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:22:43.758932 ignition[1158]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:22:43.758967 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:43.760172 ignition[1158]: PUT result: OK Jun 25 16:22:43.760233 ignition[1158]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 16:22:43.762341 ignition[1158]: GET result: OK Jun 25 16:22:43.762407 ignition[1158]: parsing config with SHA512: 2a8295f2f5cff493aef649f18fabb267e679258ac392ff3223bce191fae50d4c3758eb327a48c390e68f9836a8ca83f7ee7fdecea59a71ddb74de7c82cda8b67 Jun 25 16:22:43.770567 unknown[1158]: fetched base config from "system" Jun 25 16:22:43.770583 unknown[1158]: fetched base config from "system" Jun 25 16:22:43.771216 ignition[1158]: fetch: fetch complete Jun 25 16:22:43.770590 unknown[1158]: fetched user config from "aws" Jun 25 16:22:43.771224 ignition[1158]: fetch: fetch passed Jun 25 16:22:43.774051 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:22:43.771281 ignition[1158]: Ignition finished successfully Jun 25 16:22:43.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.786018 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:22:43.814938 ignition[1164]: Ignition 2.15.0 Jun 25 16:22:43.815248 ignition[1164]: Stage: kargs Jun 25 16:22:43.815900 ignition[1164]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:43.816041 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:43.817333 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:43.826159 ignition[1164]: PUT result: OK Jun 25 16:22:43.831256 ignition[1164]: kargs: kargs passed Jun 25 16:22:43.832044 ignition[1164]: Ignition finished successfully Jun 25 16:22:43.835998 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 25 16:22:43.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.845075 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:22:43.870682 ignition[1170]: Ignition 2.15.0 Jun 25 16:22:43.871068 ignition[1170]: Stage: disks Jun 25 16:22:43.871433 ignition[1170]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:43.871454 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:43.871567 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:43.874303 ignition[1170]: PUT result: OK Jun 25 16:22:43.879192 ignition[1170]: disks: disks passed Jun 25 16:22:43.879419 ignition[1170]: Ignition finished successfully Jun 25 16:22:43.882213 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:22:43.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.883567 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:22:43.884764 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:22:43.888165 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:22:43.890401 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:22:43.892736 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:22:43.907896 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:22:43.948024 systemd-fsck[1178]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:22:43.953795 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:22:43.969817 kernel: kauditd_printk_skb: 22 callbacks suppressed Jun 25 16:22:43.969860 kernel: audit: type=1130 audit(1719332563.954:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:43.971195 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:22:44.107865 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:22:44.108708 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:22:44.108963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:22:44.137846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:22:44.155208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:22:44.159182 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:22:44.159299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
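[Editor's note] The disks stage and the fsck above address the root partition by filesystem label (/dev/disk/by-label/ROOT), a udev-maintained symlink to the real block device. A small sketch of resolving such a label (purely illustrative; the EXT4 mount message that follows suggests the target on this instance is nvme0n1p9):

    # Resolve a filesystem label to its backing device via the udev-created
    # /dev/disk/by-label/ symlinks. Illustrative only.
    import os

    def device_for_label(label: str) -> str:
        return os.path.realpath(f"/dev/disk/by-label/{label}")

    if __name__ == "__main__":
        print(device_for_label("ROOT"))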
Jun 25 16:22:44.168728 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1195) Jun 25 16:22:44.168753 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:44.168765 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:44.168776 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:22:44.159338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:22:44.174650 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:22:44.176222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:22:44.176375 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:22:44.199414 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:22:44.577769 initrd-setup-root[1219]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:22:44.606070 initrd-setup-root[1226]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:22:44.612125 initrd-setup-root[1233]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:22:44.629565 initrd-setup-root[1240]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:22:44.839831 systemd-networkd[1134]: eth0: Gained IPv6LL Jun 25 16:22:45.012509 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:22:45.025556 kernel: audit: type=1130 audit(1719332565.014:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.026886 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:22:45.031389 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:22:45.043507 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:22:45.046651 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:45.078171 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:22:45.083199 kernel: audit: type=1130 audit(1719332565.079:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.083272 ignition[1307]: INFO : Ignition 2.15.0 Jun 25 16:22:45.083272 ignition[1307]: INFO : Stage: mount Jun 25 16:22:45.083272 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:45.083272 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:45.083272 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:45.088652 ignition[1307]: INFO : PUT result: OK Jun 25 16:22:45.096747 ignition[1307]: INFO : mount: mount passed Jun 25 16:22:45.098271 ignition[1307]: INFO : Ignition finished successfully Jun 25 16:22:45.100468 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jun 25 16:22:45.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.109646 kernel: audit: type=1130 audit(1719332565.099:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.110854 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:22:45.130319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:22:45.146665 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1316) Jun 25 16:22:45.149049 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:45.149106 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:45.149136 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:22:45.152644 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:22:45.155606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:22:45.174909 ignition[1334]: INFO : Ignition 2.15.0 Jun 25 16:22:45.174909 ignition[1334]: INFO : Stage: files Jun 25 16:22:45.177236 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:45.177236 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:45.177236 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:45.180924 ignition[1334]: INFO : PUT result: OK Jun 25 16:22:45.183778 ignition[1334]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:22:45.186341 ignition[1334]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:22:45.187893 ignition[1334]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:22:45.216282 ignition[1334]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:22:45.218161 ignition[1334]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:22:45.218161 ignition[1334]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:22:45.217820 unknown[1334]: wrote ssh authorized keys file for user: core Jun 25 16:22:45.222811 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:22:45.222811 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:22:45.285037 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:22:45.364478 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:22:45.364478 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 
16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:22:45.368555 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:22:45.862566 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:22:46.286145 ignition[1334]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:22:46.286145 ignition[1334]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:22:46.297976 ignition[1334]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:22:46.302269 ignition[1334]: INFO : files: files passed Jun 25 16:22:46.302269 ignition[1334]: INFO : Ignition finished successfully Jun 25 16:22:46.319995 kernel: audit: type=1130 audit(1719332566.311:37): pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.307540 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:22:46.322038 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:22:46.326236 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:22:46.326875 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:22:46.326998 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:22:46.335634 kernel: audit: type=1130 audit(1719332566.330:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.335671 kernel: audit: type=1131 audit(1719332566.330:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.344857 initrd-setup-root-after-ignition[1360]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:46.344857 initrd-setup-root-after-ignition[1360]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:46.348766 initrd-setup-root-after-ignition[1364]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:46.350892 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:22:46.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.353910 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:22:46.358202 kernel: audit: type=1130 audit(1719332566.353:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.363070 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:22:46.388170 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:22:46.388415 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:22:46.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.392855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
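[Editor's note] The files stage above is driven entirely by the user-provided Ignition config fetched from the metadata service earlier: it installs SSH keys for the core user, downloads the helm tarball and the kubernetes sysext image, creates the /etc/extensions/kubernetes.raw link, and writes and enables prepare-helm.service. A trimmed sketch of a config that would produce a subset of those operations, in the Ignition 3.x JSON schema (the spec version string, key material, and unit contents are placeholders, not the actual values from this instance):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... placeholder" ] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n" }
        ]
      }
    }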
Jun 25 16:22:46.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.396634 kernel: audit: type=1130 audit(1719332566.392:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.396658 kernel: audit: type=1131 audit(1719332566.392:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.398269 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:22:46.399743 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:22:46.414106 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:22:46.426485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:22:46.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.435959 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:22:46.455339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:46.460853 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:46.461126 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:22:46.467356 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:22:46.467523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:22:46.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.480585 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:22:46.483108 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:22:46.485164 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:22:46.488850 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:22:46.491158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:22:46.494941 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:22:46.497721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:22:46.499290 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:22:46.504543 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:22:46.507125 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:22:46.507273 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:22:46.512007 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:22:46.512542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:22:46.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:22:46.522584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:46.525069 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:22:46.525318 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:22:46.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.534142 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:22:46.534363 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:22:46.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.540407 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:22:46.540586 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:22:46.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.560249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:22:46.570575 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:22:46.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.573196 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:22:46.573905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:46.593919 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:22:46.599831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:22:46.601985 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:22:46.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.607179 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:22:46.608654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:22:46.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.618038 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:22:46.625520 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:22:46.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.635888 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:22:46.636020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 25 16:22:46.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.645340 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:22:46.646448 ignition[1378]: INFO : Ignition 2.15.0 Jun 25 16:22:46.646448 ignition[1378]: INFO : Stage: umount Jun 25 16:22:46.648723 ignition[1378]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:46.648723 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:22:46.648723 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:22:46.652597 ignition[1378]: INFO : PUT result: OK Jun 25 16:22:46.656671 ignition[1378]: INFO : umount: umount passed Jun 25 16:22:46.657604 ignition[1378]: INFO : Ignition finished successfully Jun 25 16:22:46.658606 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:22:46.658928 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:22:46.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.662403 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:22:46.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.662457 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:22:46.664583 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:22:46.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.664669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:22:46.666667 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:22:46.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.666727 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:22:46.670743 systemd[1]: Stopped target network.target - Network. Jun 25 16:22:46.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.687037 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:22:46.687137 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:22:46.693060 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:22:46.700911 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:22:46.706256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:46.716422 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 25 16:22:46.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.718797 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:22:46.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.722674 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:22:46.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.722726 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:22:46.725359 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:22:46.725513 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:22:46.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.726973 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:22:46.727023 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:22:46.729764 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:22:46.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.753000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:22:46.731079 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:22:46.732688 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:22:46.732770 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:22:46.734442 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:22:46.735702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:22:46.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.737318 systemd-networkd[1134]: eth0: DHCPv6 lease lost Jun 25 16:22:46.764000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:22:46.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.738501 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:22:46.738664 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:22:46.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.748063 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:22:46.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:46.749765 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:22:46.754539 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:22:46.754577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:46.761428 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:22:46.762693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:22:46.762760 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:22:46.764194 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:22:46.764241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:46.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.773028 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:22:46.773088 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:46.783222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:22:46.783281 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:46.799266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:46.806308 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:22:46.806400 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:22:46.807042 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:22:46.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.807413 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:22:46.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.815246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:22:46.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.815314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:46.816613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:22:46.816685 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:46.820721 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:22:46.820776 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:22:46.831599 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:22:46.831693 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:22:46.833663 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:22:46.833773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 16:22:46.852998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:22:46.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.853096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:22:46.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.853170 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:22:46.858585 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:22:46.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.858716 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:22:46.860230 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:22:46.860333 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:22:46.865760 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:22:46.887924 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:22:46.899522 systemd[1]: Switching root. Jun 25 16:22:46.934472 systemd-journald[180]: Journal stopped Jun 25 16:22:48.938638 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jun 25 16:22:48.938804 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:22:48.938829 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:22:48.938853 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:22:48.938876 kernel: SELinux: policy capability open_perms=1 Jun 25 16:22:48.938894 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:22:48.938911 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:22:48.938937 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:22:48.938955 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:22:48.938975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:22:48.938993 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:22:48.939016 systemd[1]: Successfully loaded SELinux policy in 71.753ms. Jun 25 16:22:48.939042 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.560ms. Jun 25 16:22:48.939061 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:22:48.939425 systemd[1]: Detected virtualization amazon. Jun 25 16:22:48.939454 systemd[1]: Detected architecture x86-64. Jun 25 16:22:48.939474 systemd[1]: Detected first boot. 
Jun 25 16:22:48.939603 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:22:48.939639 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:22:48.939658 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:22:48.939678 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:22:48.939697 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:22:48.939724 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:22:48.939743 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:22:48.939764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:22:48.939782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:22:48.939805 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:22:48.939823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:22:48.939842 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:22:48.939861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:22:48.939882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:22:48.939903 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:22:48.939923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:48.939943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:22:48.939961 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:22:48.939979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:22:48.939999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:22:48.940017 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:22:48.940035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:22:48.940057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:22:48.940075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:48.940093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:22:48.940112 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:22:48.940129 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:22:48.940147 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:22:48.940166 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:22:48.940185 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:22:48.940206 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:48.940227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:48.940247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:48.940267 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:22:48.940285 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:22:48.940303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
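[Editor's note] The "Initializing machine ID from VM UUID" entry above reflects first-boot behaviour in a VM: with no /etc/machine-id yet, systemd derives one from the hypervisor-provided UUID instead of generating a random value, so the ID stays stable for the instance. On an x86 KVM guest that UUID is exposed through SMBIOS; a rough sketch of reading it follows (the sysfs path and the normalisation step are assumptions for illustration, not systemd's exact logic):

    # Read the SMBIOS product UUID that a VM exposes to the guest and normalise
    # it into the 32-hex-character form used for a machine ID. Illustrative only.
    from pathlib import Path

    def vm_uuid_as_machine_id(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        raw = Path(path).read_text().strip()
        return raw.replace("-", "").lower()

    if __name__ == "__main__":
        print(vm_uuid_as_machine_id())

Journal directories under /run/log/journal/ are named after the machine ID, which is why the path /run/log/journal/ec2cc1a654d50049a0d96e89888c7e33 appears further down in this log.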
Jun 25 16:22:48.940321 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:22:48.940338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:48.940355 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:22:48.940437 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:22:48.940460 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:22:48.940480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:22:48.940500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:48.940519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:22:48.940538 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:22:48.940559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:48.940577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:22:48.940598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:48.940752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:22:48.940779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:48.940803 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:22:48.940824 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:22:48.940846 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:22:48.940867 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:22:48.940889 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:22:48.940908 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:22:48.940933 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:22:48.940954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:22:48.940975 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:22:48.940995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:22:48.941016 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:22:48.941038 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:22:48.941058 systemd[1]: Stopped verity-setup.service. Jun 25 16:22:48.941081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:48.941100 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:22:48.941124 kernel: fuse: init (API version 7.37) Jun 25 16:22:48.941146 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:22:48.941169 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:22:48.941190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:22:48.941212 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:22:48.941234 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jun 25 16:22:48.941256 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:48.941278 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:22:48.941299 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:22:48.941322 kernel: loop: module loaded Jun 25 16:22:48.941343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:48.941366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:48.941384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:48.941469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:48.941581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:22:48.941602 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:22:48.941642 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:22:48.941666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:22:48.941685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:22:48.941705 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:22:48.941843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:48.941867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:22:48.941939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:22:48.942037 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:22:48.942118 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:22:48.942142 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:22:48.942230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:22:48.942441 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:22:48.942790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:48.942820 systemd-journald[1485]: Journal started Jun 25 16:22:48.942897 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2cc1a654d50049a0d96e89888c7e33) is 4.8M, max 38.6M, 33.8M free. 
Jun 25 16:22:47.433000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:22:47.665000 audit: BPF prog-id=10 op=LOAD Jun 25 16:22:47.665000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:22:47.665000 audit: BPF prog-id=11 op=LOAD Jun 25 16:22:47.665000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:22:48.503000 audit: BPF prog-id=12 op=LOAD Jun 25 16:22:48.503000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:22:48.503000 audit: BPF prog-id=13 op=LOAD Jun 25 16:22:48.503000 audit: BPF prog-id=14 op=LOAD Jun 25 16:22:48.503000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:22:48.503000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:22:48.504000 audit: BPF prog-id=15 op=LOAD Jun 25 16:22:48.504000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:22:48.504000 audit: BPF prog-id=16 op=LOAD Jun 25 16:22:48.504000 audit: BPF prog-id=17 op=LOAD Jun 25 16:22:48.504000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:22:48.504000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:22:48.505000 audit: BPF prog-id=18 op=LOAD Jun 25 16:22:48.505000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:22:48.505000 audit: BPF prog-id=19 op=LOAD Jun 25 16:22:48.505000 audit: BPF prog-id=20 op=LOAD Jun 25 16:22:48.505000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:22:48.505000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:22:48.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.510000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:22:48.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.945876 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:22:48.742000 audit: BPF prog-id=21 op=LOAD Jun 25 16:22:48.742000 audit: BPF prog-id=22 op=LOAD Jun 25 16:22:48.950580 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 16:22:48.742000 audit: BPF prog-id=23 op=LOAD Jun 25 16:22:48.742000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:22:48.742000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:22:48.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:48.916000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:22:48.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.916000 audit[1485]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffca13dd8c0 a2=4000 a3=7ffca13dd95c items=0 ppid=1 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:48.916000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:22:48.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.491585 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:22:48.491598 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 16:22:48.506789 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:22:48.954537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:22:48.975682 kernel: kauditd_printk_skb: 91 callbacks suppressed Jun 25 16:22:48.975765 kernel: audit: type=1130 audit(1719332568.965:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.975798 kernel: audit: type=1130 audit(1719332568.970:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.964392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:48.966411 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:22:48.971283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:22:48.988517 kernel: ACPI: bus type drm_connector registered Jun 25 16:22:48.983011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:22:49.008299 kernel: audit: type=1130 audit(1719332568.994:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.008372 kernel: audit: type=1131 audit(1719332568.994:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:22:48.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.008610 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2cc1a654d50049a0d96e89888c7e33 is 110.036ms for 1100 entries. Jun 25 16:22:49.008610 systemd-journald[1485]: System Journal (/var/log/journal/ec2cc1a654d50049a0d96e89888c7e33) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:22:49.150189 systemd-journald[1485]: Received client request to flush runtime journal. Jun 25 16:22:49.150293 kernel: audit: type=1130 audit(1719332569.047:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.150342 kernel: audit: type=1130 audit(1719332569.079:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.150376 kernel: audit: type=1130 audit(1719332569.101:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.993148 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:22:49.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:48.993349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:22:49.157857 kernel: audit: type=1130 audit(1719332569.152:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.158123 udevadm[1515]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 16:22:49.046646 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:49.078763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
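The coldplug step logged above ("Coldplug All udev Devices") replays kernel "add" uevents for hardware that appeared before udevd started, and the deprecation warning from udevadm refers to the lvm2 units still pulling in systemd-udev-settle. For reference, the same operations can be driven by hand with udevadm; this is only an illustrative sketch, not something the boot itself runs this way:

    udevadm trigger --action=add   # replay "add" uevents so rules run for pre-existing devices
    udevadm settle --timeout=30    # wait for the udev event queue to drain (what udev-settle wraps)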
Jun 25 16:22:49.086891 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:22:49.098127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:22:49.110940 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:22:49.152253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:22:49.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:49.164496 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:22:49.169871 kernel: audit: type=1130 audit(1719332569.165:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.027000 audit: BPF prog-id=24 op=LOAD Jun 25 16:22:50.028000 audit: BPF prog-id=25 op=LOAD Jun 25 16:22:50.028000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:22:50.028000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:22:50.026670 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:22:50.054811 kernel: audit: type=1130 audit(1719332570.027:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.044925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:50.099690 systemd-udevd[1519]: Using default interface naming scheme 'v252'. Jun 25 16:22:50.157065 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:22:50.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.160000 audit: BPF prog-id=26 op=LOAD Jun 25 16:22:50.166019 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:22:50.190000 audit: BPF prog-id=27 op=LOAD Jun 25 16:22:50.190000 audit: BPF prog-id=28 op=LOAD Jun 25 16:22:50.190000 audit: BPF prog-id=29 op=LOAD Jun 25 16:22:50.195891 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:22:50.247750 (udev-worker)[1521]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:22:50.273661 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1528) Jun 25 16:22:50.278249 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:22:50.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.292342 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
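The journald statistics above show the runtime journal being flushed into /var/log/journal/ec2cc1a654d50049a0d96e89888c7e33 once persistent storage is available (110 ms for 1100 entries). The same flush and the resulting disk usage can be inspected with journalctl; a sketch, assuming the persistent journal configured here:

    journalctl --flush        # ask journald to move /run/log/journal into /var/log/journal
    journalctl --disk-usage   # report space used by active and archived journal files
    journalctl -b -u systemd-journal-flush.service   # review the flush unit's messages for this boot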
Jun 25 16:22:50.401263 systemd-networkd[1527]: lo: Link UP Jun 25 16:22:50.401641 systemd-networkd[1527]: lo: Gained carrier Jun 25 16:22:50.402405 systemd-networkd[1527]: Enumeration completed Jun 25 16:22:50.402697 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:22:50.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.406897 systemd-networkd[1527]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:50.407078 systemd-networkd[1527]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:22:50.407851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:22:50.421596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:22:50.420467 systemd-networkd[1527]: eth0: Link UP Jun 25 16:22:50.420704 systemd-networkd[1527]: eth0: Gained carrier Jun 25 16:22:50.420725 systemd-networkd[1527]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:50.425650 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 16:22:50.430789 systemd-networkd[1527]: eth0: DHCPv4 address 172.31.29.32/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:22:50.433333 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:22:50.433429 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jun 25 16:22:50.434084 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 16:22:50.453671 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jun 25 16:22:50.475656 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1528) Jun 25 16:22:50.494685 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jun 25 16:22:50.535657 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:22:50.673792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:22:50.741338 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:22:50.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.746969 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:22:50.783924 lvm[1634]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:22:50.816515 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:22:50.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.818282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:50.827999 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:22:50.841742 lvm[1635]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jun 25 16:22:50.873118 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:22:50.874866 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:22:50.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:50.878590 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:22:50.879055 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:22:50.882994 systemd[1]: Reached target machines.target - Containers. Jun 25 16:22:50.892553 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:22:50.894222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:50.894295 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:50.896434 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:22:50.899538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:22:50.908939 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:22:50.913476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:22:50.916740 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1637 (bootctl) Jun 25 16:22:50.920861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:22:50.957715 kernel: loop0: detected capacity change from 0 to 60984 Jun 25 16:22:50.988230 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:22:50.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:51.111728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:22:51.149645 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:22:51.210874 systemd-fsck[1645]: fsck.fat 4.2 (2021-01-31) Jun 25 16:22:51.210874 systemd-fsck[1645]: /dev/nvme0n1p1: 808 files, 120378/258078 clusters Jun 25 16:22:51.214472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:22:51.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:51.219815 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:22:51.271404 systemd[1]: Mounted boot.mount - Boot partition. 
Jun 25 16:22:51.313711 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:22:51.315691 kernel: loop2: detected capacity change from 0 to 80584 Jun 25 16:22:51.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:51.454675 kernel: loop3: detected capacity change from 0 to 209816 Jun 25 16:22:51.637663 kernel: loop4: detected capacity change from 0 to 60984 Jun 25 16:22:51.665653 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:22:51.691031 kernel: loop6: detected capacity change from 0 to 80584 Jun 25 16:22:51.712179 kernel: loop7: detected capacity change from 0 to 209816 Jun 25 16:22:51.742215 (sd-sysext)[1664]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 16:22:51.743182 (sd-sysext)[1664]: Merged extensions into '/usr'. Jun 25 16:22:51.745086 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:22:51.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:51.749898 systemd[1]: Starting ensure-sysext.service... Jun 25 16:22:51.753003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:22:51.796810 systemd[1]: Reloading. Jun 25 16:22:51.801182 systemd-tmpfiles[1666]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:22:51.821511 systemd-tmpfiles[1666]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:22:51.830836 systemd-tmpfiles[1666]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:22:51.836957 systemd-tmpfiles[1666]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:22:51.943759 systemd-networkd[1527]: eth0: Gained IPv6LL Jun 25 16:22:52.214728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:22:52.312590 ldconfig[1636]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:22:52.390749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
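The (sd-sysext) entries above show the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images being overlaid onto /usr. The merged state can be listed and re-applied with systemd-sysext (available in systemd 252, per the v252 naming scheme noted earlier); an illustrative sketch:

    systemd-sysext status    # show each hierarchy (/usr, /opt) and which extension images are merged
    systemd-sysext refresh   # unmerge and re-merge after adding or removing an extension image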
Jun 25 16:22:52.402000 audit: BPF prog-id=30 op=LOAD Jun 25 16:22:52.402000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:22:52.403000 audit: BPF prog-id=31 op=LOAD Jun 25 16:22:52.403000 audit: BPF prog-id=32 op=LOAD Jun 25 16:22:52.403000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:22:52.403000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:22:52.404000 audit: BPF prog-id=33 op=LOAD Jun 25 16:22:52.404000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:22:52.404000 audit: BPF prog-id=34 op=LOAD Jun 25 16:22:52.404000 audit: BPF prog-id=35 op=LOAD Jun 25 16:22:52.404000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:22:52.404000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:22:52.405000 audit: BPF prog-id=36 op=LOAD Jun 25 16:22:52.405000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:22:52.407000 audit: BPF prog-id=37 op=LOAD Jun 25 16:22:52.408000 audit: BPF prog-id=38 op=LOAD Jun 25 16:22:52.408000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:22:52.408000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:22:52.417125 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:22:52.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.419302 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:22:52.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.421170 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:22:52.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.429477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:52.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.435811 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:22:52.440723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:22:52.442000 audit: BPF prog-id=39 op=LOAD Jun 25 16:22:52.444640 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:22:52.448455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:22:52.458000 audit: BPF prog-id=40 op=LOAD Jun 25 16:22:52.467037 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:22:52.474876 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:22:52.485729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:52.486291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:52.489760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
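systemd-networkd-wait-online completes here because eth0 obtained 172.31.29.32/20 via DHCP from the generic zz-default.network policy recorded earlier. A minimal DHCP unit with the same effect is sketched below; this is illustrative only, not the file Flatcar ships, and 10-eth0.network is a hypothetical name:

    cat <<'EOF' >/etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0   # show link state, the acquired address, and the .network file in use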
Jun 25 16:22:52.493639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:52.497945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:52.499238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:52.499453 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:52.499907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:52.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.502244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:52.502463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:52.506736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:52.507170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:52.516430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:52.517898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:52.518113 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:52.518303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:52.524860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:52.525347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:52.531067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:22:52.532853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:52.533072 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:52.533366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 25 16:22:52.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.544547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:52.544791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:52.546830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:52.547880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:22:52.554000 audit[1741]: SYSTEM_BOOT pid=1741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.557102 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:22:52.572983 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:22:52.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.575573 systemd[1]: Finished ensure-sysext.service. Jun 25 16:22:52.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.584208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:52.584395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:52.588509 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:22:52.588779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jun 25 16:22:52.597959 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:22:52.598233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:22:52.600908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:52.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.616496 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:22:52.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:52.639000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:22:52.639000 audit[1761]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea4b3dc50 a2=420 a3=0 items=0 ppid=1735 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.639000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:22:52.640337 augenrules[1761]: No rules Jun 25 16:22:52.640950 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:22:52.645294 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:22:52.646817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:22:52.683036 systemd-resolved[1739]: Positive Trust Anchors: Jun 25 16:22:52.683783 systemd-resolved[1739]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:22:52.683935 systemd-resolved[1739]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:22:52.705403 systemd-resolved[1739]: Defaulting to hostname 'linux'. Jun 25 16:22:52.708725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:22:52.710860 systemd[1]: Reached target network.target - Network. Jun 25 16:22:52.712407 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:22:52.714105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:52.715479 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
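The audit SYSCALL/PROCTITLE pair above captures auditctl loading the compiled rules file (and augenrules reporting "No rules" to compile). The hex-encoded PROCTITLE is the command line with NUL-separated arguments; one way to decode it, as a sketch using xxd:

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules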
Jun 25 16:22:52.717115 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:22:52.718971 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:22:52.720789 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:22:52.722175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:22:52.723744 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:22:52.723776 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:22:52.725075 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:22:52.726581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:22:52.727969 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:22:52.729603 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:22:52.731655 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:22:52.741510 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:22:52.749398 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:22:52.751985 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:52.752949 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:22:52.754477 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:22:52.755915 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:22:52.758618 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:22:52.759030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:22:52.768657 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:22:52.774855 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:22:52.780090 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:22:52.785713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:22:52.791075 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:22:52.792927 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:22:52.802053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:52.807644 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:22:52.812555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:22:52.818319 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:22:52.828219 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 16:22:52.838418 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:22:52.852075 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
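The run of "Reached target ..." and "Starting ..." entries above is systemd walking the dependency graph toward multi-user.target, the default target queued at the start of this boot. The graph behind any of these targets can be examined with systemctl; a sketch:

    systemctl list-dependencies --no-pager sysinit.target   # units ordered before basic system setup completes
    systemctl list-units --type=target --no-pager           # targets currently active on the system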
Jun 25 16:22:52.872078 jq[1772]: false Jun 25 16:22:52.860252 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:22:52.868388 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:52.868475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:22:52.869449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:22:52.873966 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:22:52.879786 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:22:52.888560 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:22:52.888835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:22:52.901426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:22:52.915774 jq[1787]: true Jun 25 16:22:52.901709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:22:53.714817 systemd-timesyncd[1740]: Contacted time server 65.100.46.166:123 (0.flatcar.pool.ntp.org). Jun 25 16:22:53.716295 systemd-timesyncd[1740]: Initial clock synchronization to Tue 2024-06-25 16:22:53.714211 UTC. Jun 25 16:22:53.760766 systemd-resolved[1739]: Clock change detected. Flushing caches. Jun 25 16:22:53.783410 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 16:22:53.788512 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 16:22:53.802702 tar[1790]: linux-amd64/helm Jun 25 16:22:53.827277 jq[1793]: true Jun 25 16:22:53.830211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:22:53.836856 dbus-daemon[1771]: [system] SELinux support is enabled Jun 25 16:22:53.852229 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:22:53.857876 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:22:53.857916 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:22:53.860035 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:22:53.860057 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:22:53.879901 extend-filesystems[1773]: Found loop4 Jun 25 16:22:53.883164 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:22:53.883398 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
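timesyncd's first synchronization against 0.flatcar.pool.ntp.org stepped the local clock ("Initial clock synchronization" above), which is why systemd-resolved immediately reports "Clock change detected. Flushing caches." Synchronization state can be checked afterwards with timedatectl; a sketch:

    timedatectl timesync-status   # current NTP server, stratum, offset and poll interval
    timedatectl show-timesync     # the same information in key=value form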
Jun 25 16:22:53.907548 extend-filesystems[1773]: Found loop5 Jun 25 16:22:53.909658 update_engine[1785]: I0625 16:22:53.909586 1785 main.cc:92] Flatcar Update Engine starting Jun 25 16:22:53.912950 extend-filesystems[1773]: Found loop6 Jun 25 16:22:53.914168 extend-filesystems[1773]: Found loop7 Jun 25 16:22:53.922031 extend-filesystems[1773]: Found nvme0n1 Jun 25 16:22:53.924148 extend-filesystems[1773]: Found nvme0n1p1 Jun 25 16:22:53.931013 extend-filesystems[1773]: Found nvme0n1p2 Jun 25 16:22:53.932306 extend-filesystems[1773]: Found nvme0n1p3 Jun 25 16:22:53.938259 extend-filesystems[1773]: Found usr Jun 25 16:22:53.939538 extend-filesystems[1773]: Found nvme0n1p4 Jun 25 16:22:53.955894 extend-filesystems[1773]: Found nvme0n1p6 Jun 25 16:22:53.955894 extend-filesystems[1773]: Found nvme0n1p7 Jun 25 16:22:53.955894 extend-filesystems[1773]: Found nvme0n1p9 Jun 25 16:22:53.955894 extend-filesystems[1773]: Checking size of /dev/nvme0n1p9 Jun 25 16:22:53.966214 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 16:22:53.957453 dbus-daemon[1771]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1527 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 16:22:54.026450 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:22:54.039116 update_engine[1785]: I0625 16:22:54.026707 1785 update_check_scheduler.cc:74] Next update check in 7m27s Jun 25 16:22:54.041289 extend-filesystems[1773]: Resized partition /dev/nvme0n1p9 Jun 25 16:22:54.048192 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: Initializing new seelog logger Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: New Seelog Logger Creation Complete Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 processing appconfig overrides Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.066174 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 processing appconfig overrides Jun 25 16:22:54.077699 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.077699 amazon-ssm-agent[1809]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.077699 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 processing appconfig overrides Jun 25 16:22:54.077699 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO Proxy environment variables: Jun 25 16:22:54.086218 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:22:54.086218 amazon-ssm-agent[1809]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
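update_engine has scheduled its first Omaha update check ("Next update check in 7m27s") and locksmithd is starting with the reboot strategy, which coordinates when a downloaded update may actually be applied. On Flatcar these are typically inspected with the commands below; this is a sketch based on Flatcar's usual tooling rather than anything shown in this log:

    update_engine_client -status   # current update state and the version being tracked
    locksmithctl status            # reboot-lock holders and the configured strategy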
Jun 25 16:22:54.088058 amazon-ssm-agent[1809]: 2024/06/25 16:22:54 processing appconfig overrides Jun 25 16:22:54.099674 extend-filesystems[1830]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:22:54.128977 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 16:22:54.240893 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO no_proxy: Jun 25 16:22:54.245268 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 16:22:54.288307 systemd-logind[1784]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:22:54.288774 systemd-logind[1784]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 25 16:22:54.288893 systemd-logind[1784]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:22:54.291441 extend-filesystems[1830]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 16:22:54.291441 extend-filesystems[1830]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:22:54.291441 extend-filesystems[1830]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 16:22:54.299797 extend-filesystems[1773]: Resized filesystem in /dev/nvme0n1p9 Jun 25 16:22:54.291944 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:22:54.292298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:22:54.301265 systemd-logind[1784]: New seat seat0. Jun 25 16:22:54.319013 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:22:54.326391 bash[1839]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:22:54.327501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:22:54.336531 systemd[1]: Starting sshkeys.service... Jun 25 16:22:54.340717 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO https_proxy: Jun 25 16:22:54.360705 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:22:54.373746 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
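extend-filesystems grew the mounted root filesystem on /dev/nvme0n1p9 online from 553472 to 1489915 4 KiB blocks (roughly 2.1 GiB to 5.7 GiB) to fill the enlarged partition. The manual equivalent of what the service did is an online resize2fs run; a sketch, where growpart from cloud-utils is only needed if the partition itself must first be extended:

    growpart /dev/nvme0n1 9    # grow partition 9 to the end of the disk (assumes cloud-utils is installed)
    resize2fs /dev/nvme0n1p9   # grow the mounted ext4 filesystem to the new partition size
    df -h /                    # confirm the new capacity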
Jun 25 16:22:54.443336 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO http_proxy: Jun 25 16:22:54.587764 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO Checking if agent identity type OnPrem can be assumed Jun 25 16:22:54.610778 coreos-metadata[1770]: Jun 25 16:22:54.610 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:22:54.615316 coreos-metadata[1770]: Jun 25 16:22:54.615 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 16:22:54.616246 coreos-metadata[1770]: Jun 25 16:22:54.616 INFO Fetch successful Jun 25 16:22:54.616346 coreos-metadata[1770]: Jun 25 16:22:54.616 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 16:22:54.616974 coreos-metadata[1770]: Jun 25 16:22:54.616 INFO Fetch successful Jun 25 16:22:54.617059 coreos-metadata[1770]: Jun 25 16:22:54.616 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 16:22:54.617891 coreos-metadata[1770]: Jun 25 16:22:54.617 INFO Fetch successful Jun 25 16:22:54.617998 coreos-metadata[1770]: Jun 25 16:22:54.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 16:22:54.618660 coreos-metadata[1770]: Jun 25 16:22:54.618 INFO Fetch successful Jun 25 16:22:54.618823 coreos-metadata[1770]: Jun 25 16:22:54.618 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 16:22:54.619541 coreos-metadata[1770]: Jun 25 16:22:54.619 INFO Fetch failed with 404: resource not found Jun 25 16:22:54.619650 coreos-metadata[1770]: Jun 25 16:22:54.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 16:22:54.620271 coreos-metadata[1770]: Jun 25 16:22:54.620 INFO Fetch successful Jun 25 16:22:54.620339 coreos-metadata[1770]: Jun 25 16:22:54.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 16:22:54.623429 coreos-metadata[1770]: Jun 25 16:22:54.623 INFO Fetch successful Jun 25 16:22:54.623544 coreos-metadata[1770]: Jun 25 16:22:54.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 16:22:54.626464 coreos-metadata[1770]: Jun 25 16:22:54.626 INFO Fetch successful Jun 25 16:22:54.626554 coreos-metadata[1770]: Jun 25 16:22:54.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 16:22:54.631742 coreos-metadata[1770]: Jun 25 16:22:54.631 INFO Fetch successful Jun 25 16:22:54.631742 coreos-metadata[1770]: Jun 25 16:22:54.631 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 16:22:54.636734 coreos-metadata[1770]: Jun 25 16:22:54.636 INFO Fetch successful Jun 25 16:22:54.675163 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:22:54.677632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:22:54.703552 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1845) Jun 25 16:22:54.720352 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO Checking if agent identity type EC2 can be assumed Jun 25 16:22:54.733318 dbus-daemon[1771]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 16:22:54.733500 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
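The coreos-metadata fetches above all go to the EC2 instance metadata service at 169.254.169.254, starting with a PUT to the token endpoint (IMDSv2) and then reading individual meta-data paths; the ipv6 path legitimately returns 404 on an instance with no IPv6 address. Reproducing the same sequence by hand looks like this sketch:

    TOKEN=$(curl -sS -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")           # obtain an IMDSv2 session token
    curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id     # read one of the paths fetched above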
Jun 25 16:22:54.736023 dbus-daemon[1771]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1819 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 16:22:54.741534 locksmithd[1824]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:22:54.747547 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 16:22:54.770502 polkitd[1876]: Started polkitd version 121 Jun 25 16:22:54.825274 polkitd[1876]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 16:22:54.825373 polkitd[1876]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 16:22:54.827126 polkitd[1876]: Finished loading, compiling and executing 2 rules Jun 25 16:22:54.827850 dbus-daemon[1771]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 16:22:54.828197 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 16:22:54.828519 polkitd[1876]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 16:22:54.850216 systemd-hostnamed[1819]: Hostname set to (transient) Jun 25 16:22:54.850332 systemd-resolved[1739]: System hostname changed to 'ip-172-31-29-32'. Jun 25 16:22:54.881219 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO Agent will take identity from EC2 Jun 25 16:22:54.981687 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:22:55.080947 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:22:55.180522 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:22:55.283140 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 16:22:55.382493 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 25 16:22:55.435964 coreos-metadata[1852]: Jun 25 16:22:55.432 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:22:55.451822 coreos-metadata[1852]: Jun 25 16:22:55.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 16:22:55.458856 coreos-metadata[1852]: Jun 25 16:22:55.458 INFO Fetch successful Jun 25 16:22:55.459113 coreos-metadata[1852]: Jun 25 16:22:55.459 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 16:22:55.460537 coreos-metadata[1852]: Jun 25 16:22:55.460 INFO Fetch successful Jun 25 16:22:55.469812 unknown[1852]: wrote ssh authorized keys file for user: core Jun 25 16:22:55.482775 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 16:22:55.509840 update-ssh-keys[1977]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:22:55.510546 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:22:55.515600 systemd[1]: Finished sshkeys.service. Jun 25 16:22:55.520103 containerd[1794]: time="2024-06-25T16:22:55.519854560Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:22:55.583056 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jun 25 16:22:55.671746 sshd_keygen[1805]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:22:55.683309 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [Registrar] Starting registrar module Jun 25 16:22:55.749425 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:22:55.759522 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:22:55.773613 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:22:55.775571 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:22:55.785889 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:22:55.804740 amazon-ssm-agent[1809]: 2024-06-25 16:22:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 16:22:55.822815 containerd[1794]: time="2024-06-25T16:22:55.822661098Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:22:55.822815 containerd[1794]: time="2024-06-25T16:22:55.822732015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.830574 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:22:55.842426 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:22:55.850560 containerd[1794]: time="2024-06-25T16:22:55.850502173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:55.850683 containerd[1794]: time="2024-06-25T16:22:55.850558790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.851301 containerd[1794]: time="2024-06-25T16:22:55.851254330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:55.851301 containerd[1794]: time="2024-06-25T16:22:55.851301063Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:22:55.851448 containerd[1794]: time="2024-06-25T16:22:55.851424922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.851534 containerd[1794]: time="2024-06-25T16:22:55.851504506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:55.851580 containerd[1794]: time="2024-06-25T16:22:55.851537746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.851667 containerd[1794]: time="2024-06-25T16:22:55.851642786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.852403 containerd[1794]: time="2024-06-25T16:22:55.852372194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:22:55.852491 containerd[1794]: time="2024-06-25T16:22:55.852414458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:22:55.852491 containerd[1794]: time="2024-06-25T16:22:55.852435340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:55.852683 containerd[1794]: time="2024-06-25T16:22:55.852651076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:55.852735 containerd[1794]: time="2024-06-25T16:22:55.852686638Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:22:55.853848 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:22:55.858514 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:22:55.877858 containerd[1794]: time="2024-06-25T16:22:55.852763008Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:22:55.878100 containerd[1794]: time="2024-06-25T16:22:55.877879521Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:22:55.898526 containerd[1794]: time="2024-06-25T16:22:55.898456061Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:22:55.898677 containerd[1794]: time="2024-06-25T16:22:55.898534899Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:22:55.898677 containerd[1794]: time="2024-06-25T16:22:55.898570135Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:22:55.898905 containerd[1794]: time="2024-06-25T16:22:55.898625907Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:22:55.898905 containerd[1794]: time="2024-06-25T16:22:55.898733658Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:22:55.898905 containerd[1794]: time="2024-06-25T16:22:55.898751898Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:22:55.898905 containerd[1794]: time="2024-06-25T16:22:55.898880377Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:22:55.899168 containerd[1794]: time="2024-06-25T16:22:55.899076095Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:22:55.899236 containerd[1794]: time="2024-06-25T16:22:55.899186646Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:22:55.899236 containerd[1794]: time="2024-06-25T16:22:55.899208262Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:22:55.899236 containerd[1794]: time="2024-06-25T16:22:55.899230512Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:22:55.899349 containerd[1794]: time="2024-06-25T16:22:55.899253318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jun 25 16:22:55.899349 containerd[1794]: time="2024-06-25T16:22:55.899281050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899349 containerd[1794]: time="2024-06-25T16:22:55.899303210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899349 containerd[1794]: time="2024-06-25T16:22:55.899325605Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899496 containerd[1794]: time="2024-06-25T16:22:55.899367125Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899496 containerd[1794]: time="2024-06-25T16:22:55.899390167Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899496 containerd[1794]: time="2024-06-25T16:22:55.899410170Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.899496 containerd[1794]: time="2024-06-25T16:22:55.899428903Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:22:55.899644 containerd[1794]: time="2024-06-25T16:22:55.899576323Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:22:55.900086 containerd[1794]: time="2024-06-25T16:22:55.900062557Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:22:55.900146 containerd[1794]: time="2024-06-25T16:22:55.900105164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900146 containerd[1794]: time="2024-06-25T16:22:55.900129799Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:22:55.900242 containerd[1794]: time="2024-06-25T16:22:55.900165934Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:22:55.900286 containerd[1794]: time="2024-06-25T16:22:55.900247975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900286 containerd[1794]: time="2024-06-25T16:22:55.900270638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900380 containerd[1794]: time="2024-06-25T16:22:55.900361821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900430 containerd[1794]: time="2024-06-25T16:22:55.900389424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900430 containerd[1794]: time="2024-06-25T16:22:55.900412435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900509 containerd[1794]: time="2024-06-25T16:22:55.900450331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900509 containerd[1794]: time="2024-06-25T16:22:55.900472159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jun 25 16:22:55.900509 containerd[1794]: time="2024-06-25T16:22:55.900491275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900626 containerd[1794]: time="2024-06-25T16:22:55.900514598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:22:55.900734 containerd[1794]: time="2024-06-25T16:22:55.900665234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900793 containerd[1794]: time="2024-06-25T16:22:55.900738575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900793 containerd[1794]: time="2024-06-25T16:22:55.900760088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900793 containerd[1794]: time="2024-06-25T16:22:55.900780672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900982 containerd[1794]: time="2024-06-25T16:22:55.900802483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900982 containerd[1794]: time="2024-06-25T16:22:55.900893211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900982 containerd[1794]: time="2024-06-25T16:22:55.900927789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.900982 containerd[1794]: time="2024-06-25T16:22:55.900946685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 16:22:55.901403 containerd[1794]: time="2024-06-25T16:22:55.901319770Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false 
EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:22:55.901665 containerd[1794]: time="2024-06-25T16:22:55.901415455Z" level=info msg="Connect containerd service" Jun 25 16:22:55.901665 containerd[1794]: time="2024-06-25T16:22:55.901455279Z" level=info msg="using legacy CRI server" Jun 25 16:22:55.901665 containerd[1794]: time="2024-06-25T16:22:55.901466635Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:22:55.901665 containerd[1794]: time="2024-06-25T16:22:55.901564593Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:22:55.902429 containerd[1794]: time="2024-06-25T16:22:55.902393747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:22:55.902576 containerd[1794]: time="2024-06-25T16:22:55.902464953Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:22:55.902650 containerd[1794]: time="2024-06-25T16:22:55.902491519Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:22:55.902650 containerd[1794]: time="2024-06-25T16:22:55.902630119Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:22:55.902730 containerd[1794]: time="2024-06-25T16:22:55.902652063Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:22:55.902730 containerd[1794]: time="2024-06-25T16:22:55.902579409Z" level=info msg="Start subscribing containerd event" Jun 25 16:22:55.902804 containerd[1794]: time="2024-06-25T16:22:55.902730357Z" level=info msg="Start recovering state" Jun 25 16:22:55.902845 containerd[1794]: time="2024-06-25T16:22:55.902810079Z" level=info msg="Start event monitor" Jun 25 16:22:55.902845 containerd[1794]: time="2024-06-25T16:22:55.902830093Z" level=info msg="Start snapshots syncer" Jun 25 16:22:55.902950 containerd[1794]: time="2024-06-25T16:22:55.902844351Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:22:55.902950 containerd[1794]: time="2024-06-25T16:22:55.902856997Z" level=info msg="Start streaming server" Jun 25 16:22:55.903566 containerd[1794]: time="2024-06-25T16:22:55.903539332Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:22:55.903676 containerd[1794]: time="2024-06-25T16:22:55.903615909Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 25 16:22:55.903879 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:22:55.958654 containerd[1794]: time="2024-06-25T16:22:55.958600131Z" level=info msg="containerd successfully booted in 0.447092s" Jun 25 16:22:56.530197 amazon-ssm-agent[1809]: 2024-06-25 16:22:56 INFO [EC2Identity] EC2 registration was successful. Jun 25 16:22:56.549800 tar[1790]: linux-amd64/LICENSE Jun 25 16:22:56.550229 tar[1790]: linux-amd64/README.md Jun 25 16:22:56.558808 amazon-ssm-agent[1809]: 2024-06-25 16:22:56 INFO [CredentialRefresher] credentialRefresher has started Jun 25 16:22:56.558808 amazon-ssm-agent[1809]: 2024-06-25 16:22:56 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 16:22:56.558808 amazon-ssm-agent[1809]: 2024-06-25 16:22:56 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 16:22:56.559918 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:22:56.631291 amazon-ssm-agent[1809]: 2024-06-25 16:22:56 INFO [CredentialRefresher] Next credential rotation will be in 31.19999464105 minutes Jun 25 16:22:56.633787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:56.636798 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:22:56.643522 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:22:56.657270 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:22:56.657742 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:22:56.659349 systemd[1]: Startup finished in 800ms (kernel) + 7.676s (initrd) + 8.532s (userspace) = 17.009s. Jun 25 16:22:57.528193 kubelet[1998]: E0625 16:22:57.528110 1998 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:57.531419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:57.531788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:57.532194 systemd[1]: kubelet.service: Consumed 1.058s CPU time. Jun 25 16:22:57.573108 amazon-ssm-agent[1809]: 2024-06-25 16:22:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 16:22:57.674328 amazon-ssm-agent[1809]: 2024-06-25 16:22:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2006) started Jun 25 16:22:57.775699 amazon-ssm-agent[1809]: 2024-06-25 16:22:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 16:23:01.545768 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:23:01.558474 systemd[1]: Started sshd@0-172.31.29.32:22-139.178.89.65:33918.service - OpenSSH per-connection server daemon (139.178.89.65:33918). 
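At this point containerd has finished loading its plugins and is serving its gRPC API on /run/containerd/containerd.sock; the earlier CNI warning is expected on a fresh node, since nothing has written a network config into /etc/cni/net.d yet. As a minimal sketch (assuming the stock containerd Go client and the socket path shown in the log, not anything Flatcar itself runs), a client can confirm the daemon is reachable like this:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Dial the socket the daemon reported in the entries above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// Version() round-trips the API and proves the daemon answers.
	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}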
Jun 25 16:23:01.741181 sshd[2018]: Accepted publickey for core from 139.178.89.65 port 33918 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:01.744047 sshd[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:01.758196 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:23:01.770455 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:23:01.775404 systemd-logind[1784]: New session 1 of user core. Jun 25 16:23:01.795780 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:23:01.804451 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:23:01.817248 (systemd)[2021]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:01.982055 systemd[2021]: Queued start job for default target default.target. Jun 25 16:23:01.989473 systemd[2021]: Reached target paths.target - Paths. Jun 25 16:23:01.989510 systemd[2021]: Reached target sockets.target - Sockets. Jun 25 16:23:01.989528 systemd[2021]: Reached target timers.target - Timers. Jun 25 16:23:01.989544 systemd[2021]: Reached target basic.target - Basic System. Jun 25 16:23:01.989687 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:23:01.991062 systemd[2021]: Reached target default.target - Main User Target. Jun 25 16:23:01.991300 systemd[2021]: Startup finished in 160ms. Jun 25 16:23:01.991311 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:23:02.193337 systemd[1]: Started sshd@1-172.31.29.32:22-139.178.89.65:33932.service - OpenSSH per-connection server daemon (139.178.89.65:33932). Jun 25 16:23:02.350945 sshd[2030]: Accepted publickey for core from 139.178.89.65 port 33932 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:02.352426 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:02.359049 systemd-logind[1784]: New session 2 of user core. Jun 25 16:23:02.369161 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:23:02.495014 sshd[2030]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:02.499636 systemd[1]: sshd@1-172.31.29.32:22-139.178.89.65:33932.service: Deactivated successfully. Jun 25 16:23:02.504203 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:23:02.505181 systemd-logind[1784]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:23:02.506822 systemd-logind[1784]: Removed session 2. Jun 25 16:23:02.543128 systemd[1]: Started sshd@2-172.31.29.32:22-139.178.89.65:33942.service - OpenSSH per-connection server daemon (139.178.89.65:33942). Jun 25 16:23:02.720617 sshd[2036]: Accepted publickey for core from 139.178.89.65 port 33942 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:02.722345 sshd[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:02.732338 systemd-logind[1784]: New session 3 of user core. Jun 25 16:23:02.738282 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:23:02.858396 sshd[2036]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:02.862993 systemd[1]: sshd@2-172.31.29.32:22-139.178.89.65:33942.service: Deactivated successfully. Jun 25 16:23:02.864130 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:23:02.865142 systemd-logind[1784]: Session 3 logged out. 
Waiting for processes to exit. Jun 25 16:23:02.866754 systemd-logind[1784]: Removed session 3. Jun 25 16:23:02.905445 systemd[1]: Started sshd@3-172.31.29.32:22-139.178.89.65:33944.service - OpenSSH per-connection server daemon (139.178.89.65:33944). Jun 25 16:23:03.065351 sshd[2042]: Accepted publickey for core from 139.178.89.65 port 33944 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:03.066931 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:03.073625 systemd-logind[1784]: New session 4 of user core. Jun 25 16:23:03.079169 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:23:03.202468 sshd[2042]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:03.207830 systemd[1]: sshd@3-172.31.29.32:22-139.178.89.65:33944.service: Deactivated successfully. Jun 25 16:23:03.208813 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:23:03.209478 systemd-logind[1784]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:23:03.210483 systemd-logind[1784]: Removed session 4. Jun 25 16:23:03.239803 systemd[1]: Started sshd@4-172.31.29.32:22-139.178.89.65:33952.service - OpenSSH per-connection server daemon (139.178.89.65:33952). Jun 25 16:23:03.396770 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 33952 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:03.398560 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:03.405985 systemd-logind[1784]: New session 5 of user core. Jun 25 16:23:03.412216 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:23:03.558654 sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:23:03.559425 sudo[2051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:03.582238 sudo[2051]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:03.605775 sshd[2048]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:03.611422 systemd[1]: sshd@4-172.31.29.32:22-139.178.89.65:33952.service: Deactivated successfully. Jun 25 16:23:03.613613 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:23:03.614611 systemd-logind[1784]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:23:03.616085 systemd-logind[1784]: Removed session 5. Jun 25 16:23:03.647827 systemd[1]: Started sshd@5-172.31.29.32:22-139.178.89.65:33962.service - OpenSSH per-connection server daemon (139.178.89.65:33962). Jun 25 16:23:03.806004 sshd[2055]: Accepted publickey for core from 139.178.89.65 port 33962 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:03.808099 sshd[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:03.813928 systemd-logind[1784]: New session 6 of user core. Jun 25 16:23:03.821209 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 25 16:23:03.931950 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:23:03.932332 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:03.943678 sudo[2059]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:03.956810 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:23:03.957205 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:03.980178 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:23:03.990000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:23:03.991486 kernel: kauditd_printk_skb: 60 callbacks suppressed Jun 25 16:23:03.991549 kernel: audit: type=1305 audit(1719332583.990:200): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:23:03.990000 audit[2062]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd617e9490 a2=420 a3=0 items=0 ppid=1 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:03.994692 auditctl[2062]: No rules Jun 25 16:23:03.997258 kernel: audit: type=1300 audit(1719332583.990:200): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd617e9490 a2=420 a3=0 items=0 ppid=1 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:03.997409 kernel: audit: type=1327 audit(1719332583.990:200): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:23:03.990000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:23:03.998548 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:23:03.998792 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:23:03.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.004021 kernel: audit: type=1131 audit(1719332583.997:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.005419 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:23:04.040197 augenrules[2079]: No rules Jun 25 16:23:04.040935 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:23:04.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.042000 audit[2058]: USER_END pid=2058 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:04.043128 sudo[2058]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:04.048335 kernel: audit: type=1130 audit(1719332584.040:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.048408 kernel: audit: type=1106 audit(1719332584.042:203): pid=2058 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.048444 kernel: audit: type=1104 audit(1719332584.042:204): pid=2058 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.042000 audit[2058]: CRED_DISP pid=2058 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.066178 sshd[2055]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:04.067000 audit[2055]: USER_END pid=2055 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.067000 audit[2055]: CRED_DISP pid=2055 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.071587 systemd[1]: sshd@5-172.31.29.32:22-139.178.89.65:33962.service: Deactivated successfully. Jun 25 16:23:04.072737 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:23:04.076099 kernel: audit: type=1106 audit(1719332584.067:205): pid=2055 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.076190 kernel: audit: type=1104 audit(1719332584.067:206): pid=2055 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.076224 kernel: audit: type=1131 audit(1719332584.067:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.32:22-139.178.89.65:33962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.32:22-139.178.89.65:33962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.075432 systemd-logind[1784]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:23:04.077240 systemd-logind[1784]: Removed session 6. 
Jun 25 16:23:04.105861 systemd[1]: Started sshd@6-172.31.29.32:22-139.178.89.65:33970.service - OpenSSH per-connection server daemon (139.178.89.65:33970). Jun 25 16:23:04.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.32:22-139.178.89.65:33970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.262000 audit[2085]: USER_ACCT pid=2085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.263690 sshd[2085]: Accepted publickey for core from 139.178.89.65 port 33970 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:23:04.263000 audit[2085]: CRED_ACQ pid=2085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.264000 audit[2085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd13346840 a2=3 a3=7fd7ed8e5480 items=0 ppid=1 pid=2085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:04.264000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:04.265506 sshd[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:04.271895 systemd-logind[1784]: New session 7 of user core. Jun 25 16:23:04.278197 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:23:04.283000 audit[2085]: USER_START pid=2085 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.285000 audit[2087]: CRED_ACQ pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:04.377000 audit[2088]: USER_ACCT pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.378000 audit[2088]: CRED_REFR pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.378737 sudo[2088]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:23:04.379119 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:04.381000 audit[2088]: USER_START pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:04.653795 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jun 25 16:23:05.378714 dockerd[2098]: time="2024-06-25T16:23:05.378654584Z" level=info msg="Starting up" Jun 25 16:23:05.447942 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1375423757-merged.mount: Deactivated successfully. Jun 25 16:23:05.645038 systemd[1]: var-lib-docker-metacopy\x2dcheck684427389-merged.mount: Deactivated successfully. Jun 25 16:23:05.667100 dockerd[2098]: time="2024-06-25T16:23:05.667041928Z" level=info msg="Loading containers: start." Jun 25 16:23:05.778000 audit[2129]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.778000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc577c0350 a2=0 a3=7f7301e85e90 items=0 ppid=2098 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:23:05.781000 audit[2131]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.781000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff2cb1b2b0 a2=0 a3=7f50fd753e90 items=0 ppid=2098 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.781000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:23:05.784000 audit[2133]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.784000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd78fe5af0 a2=0 a3=7f8cca4fce90 items=0 ppid=2098 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.784000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:23:05.787000 audit[2135]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.787000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0b78b770 a2=0 a3=7fa4ea04ae90 items=0 ppid=2098 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.787000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:23:05.791000 audit[2137]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.791000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd945d2fb0 a2=0 a3=7f51bda72e90 items=0 ppid=2098 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.791000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:23:05.794000 audit[2139]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.794000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcebbde490 a2=0 a3=7f853abe1e90 items=0 ppid=2098 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.794000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:23:05.811000 audit[2141]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.811000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea3b69180 a2=0 a3=7f172872ae90 items=0 ppid=2098 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.811000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:23:05.813000 audit[2143]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.813000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe2049b7a0 a2=0 a3=7f4368589e90 items=0 ppid=2098 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.813000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:23:05.818000 audit[2145]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.818000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffce31d1e40 a2=0 a3=7fb22ac25e90 items=0 ppid=2098 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.818000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:05.833000 audit[2149]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.833000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcbba496d0 a2=0 a3=7f6d13249e90 items=0 ppid=2098 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:23:05.833000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:05.834000 audit[2150]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.834000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc6345ec90 a2=0 a3=7f66e198be90 items=0 ppid=2098 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.834000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:05.846898 kernel: Initializing XFRM netlink socket Jun 25 16:23:05.895908 (udev-worker)[2109]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:23:05.957000 audit[2158]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:05.957000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe365bad50 a2=0 a3=7f4c64b84e90 items=0 ppid=2098 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:05.957000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:23:06.003000 audit[2161]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.003000 audit[2161]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffd2ffbf70 a2=0 a3=7ff6a3d66e90 items=0 ppid=2098 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.003000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:23:06.011000 audit[2165]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.011000 audit[2165]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffec4242570 a2=0 a3=7f63e5dfae90 items=0 ppid=2098 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:23:06.014000 audit[2167]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.014000 audit[2167]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd9f254cd0 a2=0 a3=7f196d39de90 items=0 ppid=2098 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.014000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:23:06.017000 audit[2169]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.017000 audit[2169]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe55419cb0 a2=0 a3=7f96a52eae90 items=0 ppid=2098 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.017000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:23:06.020000 audit[2171]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.020000 audit[2171]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff38e83450 a2=0 a3=7f84b1829e90 items=0 ppid=2098 pid=2171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.020000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:23:06.023000 audit[2173]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.023000 audit[2173]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffea33892d0 a2=0 a3=7f91353cce90 items=0 ppid=2098 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.023000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:23:06.032000 audit[2176]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.032000 audit[2176]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe1359c230 a2=0 a3=7fac9e018e90 items=0 ppid=2098 pid=2176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.032000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:23:06.036000 audit[2178]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.036000 audit[2178]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff4c5fa1b0 a2=0 
a3=7f39bccfbe90 items=0 ppid=2098 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.036000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:23:06.039000 audit[2180]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.039000 audit[2180]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffde54b5140 a2=0 a3=7f93850fbe90 items=0 ppid=2098 pid=2180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:23:06.042000 audit[2182]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.042000 audit[2182]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce6787200 a2=0 a3=7f6d850d3e90 items=0 ppid=2098 pid=2182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.042000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:23:06.043892 systemd-networkd[1527]: docker0: Link UP Jun 25 16:23:06.072000 audit[2186]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.072000 audit[2186]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff29601f50 a2=0 a3=7fee32deee90 items=0 ppid=2098 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.072000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:06.075000 audit[2187]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:06.075000 audit[2187]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd6ad2b800 a2=0 a3=7f3891df8e90 items=0 ppid=2098 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.075000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:06.076743 dockerd[2098]: time="2024-06-25T16:23:06.076699401Z" level=info msg="Loading containers: done." 
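The NETFILTER_CFG audit records above are dockerd programming its standard chains (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1 and -2) plus the MASQUERADE rule for the new docker0 bridge, after which it reports "Loading containers: done." A minimal sketch of checking that the daemon's API is reachable once it starts listening a few entries below (assuming the standard Docker Engine Go client, which defaults to the local unix socket when DOCKER_HOST is unset):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket when DOCKER_HOST is unset;
	// WithAPIVersionNegotiation avoids client/daemon version-mismatch errors.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Printf("Engine API %s reachable\n", ping.APIVersion)
}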
Jun 25 16:23:06.274895 dockerd[2098]: time="2024-06-25T16:23:06.274817398Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:23:06.275143 dockerd[2098]: time="2024-06-25T16:23:06.275114517Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:23:06.275370 dockerd[2098]: time="2024-06-25T16:23:06.275344768Z" level=info msg="Daemon has completed initialization" Jun 25 16:23:06.327471 dockerd[2098]: time="2024-06-25T16:23:06.327386367Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:23:06.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:06.328321 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:23:06.443535 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3056999822-merged.mount: Deactivated successfully. Jun 25 16:23:07.659734 containerd[1794]: time="2024-06-25T16:23:07.659682638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:23:07.728125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:23:07.730535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:07.731316 systemd[1]: kubelet.service: Consumed 1.058s CPU time. Jun 25 16:23:07.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:07.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:07.741465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:08.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:08.290162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:08.400528 kubelet[2235]: E0625 16:23:08.400468 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:23:08.404604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:23:08.404805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:23:08.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:23:08.657469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205627128.mount: Deactivated successfully. 
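The kubelet failure above is the second pass through the same crash-and-restart cycle: the unit exits because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules another restart. On a node that is later bootstrapped with kubeadm this is the expected pre-join state, since kubeadm writes that file during init/join. As a purely hypothetical helper (not something Flatcar or kubeadm ships), provisioning tooling could wait for the file before treating the node as bootstrapped:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses.
// Purely illustrative; the path below is the one the kubelet complains about.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForFile("/var/lib/kubelet/config.yaml", 5*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; the restart loop should settle")
}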
Jun 25 16:23:11.365831 containerd[1794]: time="2024-06-25T16:23:11.365776360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:11.367303 containerd[1794]: time="2024-06-25T16:23:11.367246143Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:23:11.369150 containerd[1794]: time="2024-06-25T16:23:11.369109156Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:11.371837 containerd[1794]: time="2024-06-25T16:23:11.371797943Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:11.377990 containerd[1794]: time="2024-06-25T16:23:11.377941204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:11.379786 containerd[1794]: time="2024-06-25T16:23:11.379735038Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.719997873s" Jun 25 16:23:11.380436 containerd[1794]: time="2024-06-25T16:23:11.379793205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:23:11.410802 containerd[1794]: time="2024-06-25T16:23:11.410757374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:23:14.072414 containerd[1794]: time="2024-06-25T16:23:14.072365886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:14.074601 containerd[1794]: time="2024-06-25T16:23:14.074532139Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:23:14.076005 containerd[1794]: time="2024-06-25T16:23:14.075965824Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:14.079256 containerd[1794]: time="2024-06-25T16:23:14.079218846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:14.082520 containerd[1794]: time="2024-06-25T16:23:14.082482804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:14.084059 containerd[1794]: time="2024-06-25T16:23:14.084016048Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.673211162s" Jun 25 16:23:14.084214 containerd[1794]: time="2024-06-25T16:23:14.084189726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:23:14.117759 containerd[1794]: time="2024-06-25T16:23:14.117724613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:23:15.626354 containerd[1794]: time="2024-06-25T16:23:15.626294081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:15.627848 containerd[1794]: time="2024-06-25T16:23:15.627789120Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:23:15.629487 containerd[1794]: time="2024-06-25T16:23:15.629451785Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:15.632903 containerd[1794]: time="2024-06-25T16:23:15.632838964Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:15.636374 containerd[1794]: time="2024-06-25T16:23:15.636138198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:15.638129 containerd[1794]: time="2024-06-25T16:23:15.638079740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.520078067s" Jun 25 16:23:15.638359 containerd[1794]: time="2024-06-25T16:23:15.638321332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:23:15.675423 containerd[1794]: time="2024-06-25T16:23:15.675381056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:23:17.222708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704840290.mount: Deactivated successfully. 
Jun 25 16:23:18.040525 containerd[1794]: time="2024-06-25T16:23:18.040376483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:18.042324 containerd[1794]: time="2024-06-25T16:23:18.042264567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:23:18.044490 containerd[1794]: time="2024-06-25T16:23:18.044415316Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:18.048386 containerd[1794]: time="2024-06-25T16:23:18.048336050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:18.050942 containerd[1794]: time="2024-06-25T16:23:18.050881538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:18.051913 containerd[1794]: time="2024-06-25T16:23:18.051848785Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.376389192s" Jun 25 16:23:18.052157 containerd[1794]: time="2024-06-25T16:23:18.052047602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:23:18.079897 containerd[1794]: time="2024-06-25T16:23:18.079838058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:23:18.476280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:23:18.482311 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:23:18.482426 kernel: audit: type=1130 audit(1719332598.475:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:18.482497 kernel: audit: type=1131 audit(1719332598.475:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:18.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:18.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:18.476537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:18.490555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:18.877960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114520677.mount: Deactivated successfully. 
Jun 25 16:23:18.944388 containerd[1794]: time="2024-06-25T16:23:18.944207765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:18.960150 containerd[1794]: time="2024-06-25T16:23:18.960072284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:23:18.985404 containerd[1794]: time="2024-06-25T16:23:18.985330423Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.030498 containerd[1794]: time="2024-06-25T16:23:19.030178891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.044991 containerd[1794]: time="2024-06-25T16:23:19.044248426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.051659 containerd[1794]: time="2024-06-25T16:23:19.051574169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 971.578605ms" Jun 25 16:23:19.052426 containerd[1794]: time="2024-06-25T16:23:19.052354014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:23:19.073437 kernel: audit: type=1130 audit(1719332599.068:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.069516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:19.131981 containerd[1794]: time="2024-06-25T16:23:19.131174018Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:23:19.230290 kubelet[2340]: E0625 16:23:19.230249 2340 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:23:19.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:23:19.238677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:23:19.239078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:23:19.243912 kernel: audit: type=1131 audit(1719332599.239:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:23:19.829107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077824962.mount: Deactivated successfully. Jun 25 16:23:22.938142 containerd[1794]: time="2024-06-25T16:23:22.938074377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:22.944284 containerd[1794]: time="2024-06-25T16:23:22.944210634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:23:22.956127 containerd[1794]: time="2024-06-25T16:23:22.955515274Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:22.968925 containerd[1794]: time="2024-06-25T16:23:22.968850480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:22.979618 containerd[1794]: time="2024-06-25T16:23:22.979559065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:22.981326 containerd[1794]: time="2024-06-25T16:23:22.981205539Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.849870905s" Jun 25 16:23:22.981574 containerd[1794]: time="2024-06-25T16:23:22.981547051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:23:23.019078 containerd[1794]: time="2024-06-25T16:23:23.019035684Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:23:23.666112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount912970817.mount: Deactivated successfully. 
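The etcd pull recorded above reports `bytes read=56651625` and a wall time of `3.849870905s`; the log never states a transfer rate, so the figure below is derived arithmetic on those two numbers, nothing more:

```python
# Back-of-the-envelope throughput for the etcd:3.5.10-0 pull, using only the
# two figures reported in the log above. The rate is derived, not observed.
bytes_read = 56_651_625      # "stop pulling image registry.k8s.io/etcd:3.5.10-0: ... bytes read=56651625"
duration_s = 3.849_870_905   # "... in 3.849870905s"

mib_per_s = bytes_read / duration_s / (1024 * 1024)
print(f"effective pull rate ~{mib_per_s:.1f} MiB/s")  # roughly 14 MiB/s
```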
Jun 25 16:23:24.584199 containerd[1794]: time="2024-06-25T16:23:24.584091055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:24.585919 containerd[1794]: time="2024-06-25T16:23:24.585798394Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:23:24.591398 containerd[1794]: time="2024-06-25T16:23:24.591329595Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:24.594583 containerd[1794]: time="2024-06-25T16:23:24.594541716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:24.597410 containerd[1794]: time="2024-06-25T16:23:24.597370026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:24.598403 containerd[1794]: time="2024-06-25T16:23:24.598348871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.57908436s" Jun 25 16:23:24.598504 containerd[1794]: time="2024-06-25T16:23:24.598401192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:23:24.883711 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 16:23:24.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:24.888001 kernel: audit: type=1131 audit(1719332604.883:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:24.917890 kernel: audit: type=1334 audit(1719332604.916:251): prog-id=46 op=UNLOAD Jun 25 16:23:24.918079 kernel: audit: type=1334 audit(1719332604.916:252): prog-id=45 op=UNLOAD Jun 25 16:23:24.916000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:23:24.916000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:23:24.916000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:23:24.919479 kernel: audit: type=1334 audit(1719332604.916:253): prog-id=44 op=UNLOAD Jun 25 16:23:28.702510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:28.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.708902 kernel: audit: type=1130 audit(1719332608.702:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:28.709021 kernel: audit: type=1131 audit(1719332608.702:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.720644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:28.766890 systemd[1]: Reloading. Jun 25 16:23:29.105250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:23:29.201000 audit: BPF prog-id=47 op=LOAD Jun 25 16:23:29.206601 kernel: audit: type=1334 audit(1719332609.201:256): prog-id=47 op=LOAD Jun 25 16:23:29.206835 kernel: audit: type=1334 audit(1719332609.203:257): prog-id=30 op=UNLOAD Jun 25 16:23:29.206900 kernel: audit: type=1334 audit(1719332609.203:258): prog-id=48 op=LOAD Jun 25 16:23:29.206935 kernel: audit: type=1334 audit(1719332609.203:259): prog-id=49 op=LOAD Jun 25 16:23:29.203000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:23:29.203000 audit: BPF prog-id=48 op=LOAD Jun 25 16:23:29.203000 audit: BPF prog-id=49 op=LOAD Jun 25 16:23:29.203000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:23:29.203000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:23:29.204000 audit: BPF prog-id=50 op=LOAD Jun 25 16:23:29.204000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:23:29.204000 audit: BPF prog-id=51 op=LOAD Jun 25 16:23:29.204000 audit: BPF prog-id=52 op=LOAD Jun 25 16:23:29.204000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:23:29.204000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:23:29.206000 audit: BPF prog-id=53 op=LOAD Jun 25 16:23:29.206000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:23:29.208000 audit: BPF prog-id=54 op=LOAD Jun 25 16:23:29.208000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:23:29.209000 audit: BPF prog-id=55 op=LOAD Jun 25 16:23:29.210000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:23:29.210000 audit: BPF prog-id=56 op=LOAD Jun 25 16:23:29.210000 audit: BPF prog-id=57 op=LOAD Jun 25 16:23:29.210000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:23:29.210000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:23:29.211000 audit: BPF prog-id=58 op=LOAD Jun 25 16:23:29.211000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:23:29.213000 audit: BPF prog-id=59 op=LOAD Jun 25 16:23:29.213000 audit: BPF prog-id=60 op=LOAD Jun 25 16:23:29.213000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:23:29.213000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:23:29.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:29.242374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:29.246304 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:29.246775 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:23:29.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:29.247077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:29.254559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:29.491259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:29.587505 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:23:29.588041 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:23:29.588157 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:23:29.588366 kubelet[2550]: I0625 16:23:29.588326 2550 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:23:30.192960 kubelet[2550]: I0625 16:23:30.192853 2550 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:23:30.192960 kubelet[2550]: I0625 16:23:30.192947 2550 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:23:30.193579 kubelet[2550]: I0625 16:23:30.193487 2550 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:23:30.268007 kubelet[2550]: E0625 16:23:30.267970 2550 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.268307 kubelet[2550]: I0625 16:23:30.268274 2550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:23:30.295186 kubelet[2550]: I0625 16:23:30.295145 2550 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:23:30.298149 kubelet[2550]: I0625 16:23:30.298114 2550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:23:30.298379 kubelet[2550]: I0625 16:23:30.298352 2550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:23:30.298962 kubelet[2550]: I0625 16:23:30.298937 2550 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:23:30.298962 kubelet[2550]: I0625 16:23:30.298965 2550 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:23:30.302456 kubelet[2550]: I0625 16:23:30.302419 2550 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:30.305181 kubelet[2550]: W0625 16:23:30.305126 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-32&limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.305356 kubelet[2550]: E0625 16:23:30.305345 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-32&limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.306296 kubelet[2550]: I0625 16:23:30.306269 2550 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:23:30.306386 kubelet[2550]: I0625 16:23:30.306305 2550 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:23:30.306386 kubelet[2550]: I0625 16:23:30.306345 2550 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:23:30.306386 kubelet[2550]: I0625 16:23:30.306368 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:23:30.309355 kubelet[2550]: W0625 16:23:30.309082 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: 
connect: connection refused Jun 25 16:23:30.309355 kubelet[2550]: E0625 16:23:30.309139 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.309587 kubelet[2550]: I0625 16:23:30.309521 2550 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:23:30.319804 kubelet[2550]: W0625 16:23:30.319777 2550 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:23:30.322632 kubelet[2550]: I0625 16:23:30.322599 2550 server.go:1232] "Started kubelet" Jun 25 16:23:30.330271 kubelet[2550]: E0625 16:23:30.330138 2550 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-32.17dc4be651058511", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-32", UID:"ip-172-31-29-32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-32"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 23, 30, 322564369, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 23, 30, 322564369, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-32"}': 'Post "https://172.31.29.32:6443/api/v1/namespaces/default/events": dial tcp 172.31.29.32:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:23:30.330850 kubelet[2550]: I0625 16:23:30.330825 2550 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:23:30.333084 kubelet[2550]: I0625 16:23:30.332038 2550 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:23:30.333084 kubelet[2550]: I0625 16:23:30.333008 2550 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:23:30.336900 kubelet[2550]: E0625 16:23:30.334231 2550 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:23:30.336900 kubelet[2550]: E0625 16:23:30.334276 2550 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:23:30.336900 kubelet[2550]: I0625 16:23:30.336334 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:23:30.338150 kubelet[2550]: I0625 16:23:30.338132 2550 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:23:30.348000 audit[2560]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.350522 kernel: kauditd_printk_skb: 27 callbacks suppressed Jun 25 16:23:30.350599 kernel: audit: type=1325 audit(1719332610.348:287): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.348000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb5918570 a2=0 a3=7f9159d4de90 items=0 ppid=2550 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.353984 kubelet[2550]: I0625 16:23:30.353859 2550 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:23:30.357057 kubelet[2550]: I0625 16:23:30.357034 2550 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:23:30.357485 kubelet[2550]: I0625 16:23:30.357470 2550 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:23:30.357621 kernel: audit: type=1300 audit(1719332610.348:287): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb5918570 a2=0 a3=7f9159d4de90 items=0 ppid=2550 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:23:30.360302 kubelet[2550]: W0625 16:23:30.360256 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.360416 kubelet[2550]: E0625 16:23:30.360313 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.361262 kernel: audit: type=1327 audit(1719332610.348:287): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:23:30.361403 kubelet[2550]: E0625 16:23:30.360648 2550 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": dial tcp 172.31.29.32:6443: connect: connection refused" interval="200ms" Jun 25 16:23:30.378142 kernel: audit: type=1325 audit(1719332610.361:288): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.378293 kernel: audit: type=1300 audit(1719332610.361:288): arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffe2264d5b0 a2=0 a3=7f6e9d722e90 items=0 ppid=2550 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.378339 kernel: audit: type=1327 audit(1719332610.361:288): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:23:30.361000 audit[2562]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.361000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2264d5b0 a2=0 a3=7f6e9d722e90 items=0 ppid=2550 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:23:30.387000 audit[2564]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.387000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb20ee710 a2=0 a3=7f7f33b24e90 items=0 ppid=2550 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.404926 kernel: audit: type=1325 audit(1719332610.387:289): table=filter:28 family=2 entries=2 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.405087 kernel: audit: type=1300 audit(1719332610.387:289): arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb20ee710 a2=0 a3=7f7f33b24e90 items=0 ppid=2550 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:30.414913 kernel: audit: type=1327 audit(1719332610.387:289): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:30.423000 audit[2568]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.423000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3ad006f0 a2=0 a3=7f4587d31e90 items=0 ppid=2550 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.426086 kernel: audit: type=1325 audit(1719332610.423:290): table=filter:29 family=2 entries=2 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:30.459000 audit[2571]: NETFILTER_CFG table=filter:30 
family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.459000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff4bce4d70 a2=0 a3=7f30b006fe90 items=0 ppid=2550 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:23:30.461228 kubelet[2550]: I0625 16:23:30.461173 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:23:30.464000 audit[2574]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.464000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeefe62a90 a2=0 a3=7fe126eaae90 items=0 ppid=2550 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:23:30.467566 kubelet[2550]: I0625 16:23:30.467540 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:30.468924 kubelet[2550]: E0625 16:23:30.468723 2550 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.32:6443/api/v1/nodes\": dial tcp 172.31.29.32:6443: connect: connection refused" node="ip-172-31-29-32" Jun 25 16:23:30.469403 kubelet[2550]: I0625 16:23:30.469384 2550 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:23:30.469403 kubelet[2550]: I0625 16:23:30.469404 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:23:30.469747 kubelet[2550]: I0625 16:23:30.469422 2550 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:30.469000 audit[2575]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.469000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc47e95720 a2=0 a3=7f1ba51afe90 items=0 ppid=2550 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:23:30.470000 audit[2573]: NETFILTER_CFG table=mangle:33 family=10 entries=2 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:30.470000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff62d7a670 a2=0 a3=7fbd081aae90 items=0 ppid=2550 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jun 25 16:23:30.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:23:30.472291 kubelet[2550]: I0625 16:23:30.471485 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:23:30.472291 kubelet[2550]: I0625 16:23:30.471507 2550 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:23:30.472291 kubelet[2550]: I0625 16:23:30.471531 2550 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:23:30.472291 kubelet[2550]: E0625 16:23:30.471634 2550 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:23:30.472940 kubelet[2550]: I0625 16:23:30.472921 2550 policy_none.go:49] "None policy: Start" Jun 25 16:23:30.473371 kubelet[2550]: W0625 16:23:30.473157 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.473371 kubelet[2550]: E0625 16:23:30.473206 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:30.474117 kubelet[2550]: I0625 16:23:30.474100 2550 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:23:30.474193 kubelet[2550]: I0625 16:23:30.474136 2550 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:23:30.473000 audit[2576]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:30.473000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb881dee0 a2=0 a3=7f6ab3f27e90 items=0 ppid=2550 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:23:30.474000 audit[2578]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:30.474000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffeb362290 a2=0 a3=7f2285e67e90 items=0 ppid=2550 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:23:30.476000 audit[2579]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:30.476000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd3d0a6ca0 a2=0 a3=7fe00bc8be90 items=0 ppid=2550 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:23:30.480000 audit[2580]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:30.480000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc59190c0 a2=0 a3=7f211026ce90 items=0 ppid=2550 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:23:30.488611 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:23:30.500252 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:23:30.503748 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 16:23:30.514829 kubelet[2550]: I0625 16:23:30.514793 2550 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:23:30.515229 kubelet[2550]: I0625 16:23:30.515181 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:23:30.516563 kubelet[2550]: E0625 16:23:30.516357 2550 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-32\" not found" Jun 25 16:23:30.561424 kubelet[2550]: E0625 16:23:30.561331 2550 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": dial tcp 172.31.29.32:6443: connect: connection refused" interval="400ms" Jun 25 16:23:30.572700 kubelet[2550]: I0625 16:23:30.572666 2550 topology_manager.go:215] "Topology Admit Handler" podUID="719e29391ebf84c2ac52bf21b770c7e4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-32" Jun 25 16:23:30.574217 kubelet[2550]: I0625 16:23:30.574196 2550 topology_manager.go:215] "Topology Admit Handler" podUID="cccd11b331768fd8fd796eb033608ff9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.575735 kubelet[2550]: I0625 16:23:30.575708 2550 topology_manager.go:215] "Topology Admit Handler" podUID="de587db70178addac48911e7a00312a8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-32" Jun 25 16:23:30.583381 systemd[1]: Created slice kubepods-burstable-pod719e29391ebf84c2ac52bf21b770c7e4.slice - libcontainer container kubepods-burstable-pod719e29391ebf84c2ac52bf21b770c7e4.slice. Jun 25 16:23:30.601431 systemd[1]: Created slice kubepods-burstable-podcccd11b331768fd8fd796eb033608ff9.slice - libcontainer container kubepods-burstable-podcccd11b331768fd8fd796eb033608ff9.slice. Jun 25 16:23:30.615468 systemd[1]: Created slice kubepods-burstable-podde587db70178addac48911e7a00312a8.slice - libcontainer container kubepods-burstable-podde587db70178addac48911e7a00312a8.slice. 
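The audit `PROCTITLE` fields in the records above carry the invoked command line as hex-encoded, NUL-separated argv. Decoding the value from the record at 16:23:30.348 shows the `iptables` chain-creation call behind it; the decode below is a generic sketch, not output from auditd tooling:

```python
# Decode an audit PROCTITLE value (hex of the NUL-separated argv).
# The hex string is copied verbatim from the record logged at 16:23:30.348 above.
hex_proctitle = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
)

argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```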
Jun 25 16:23:30.662074 kubelet[2550]: I0625 16:23:30.662027 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-ca-certs\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 16:23:30.662074 kubelet[2550]: I0625 16:23:30.662081 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 16:23:30.662712 kubelet[2550]: I0625 16:23:30.662111 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.662712 kubelet[2550]: I0625 16:23:30.662141 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.662712 kubelet[2550]: I0625 16:23:30.662171 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.662712 kubelet[2550]: I0625 16:23:30.662204 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 16:23:30.662712 kubelet[2550]: I0625 16:23:30.662238 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.662858 kubelet[2550]: I0625 16:23:30.662266 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:30.662858 kubelet[2550]: I0625 16:23:30.662296 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/de587db70178addac48911e7a00312a8-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-32\" (UID: \"de587db70178addac48911e7a00312a8\") " pod="kube-system/kube-scheduler-ip-172-31-29-32" Jun 25 16:23:30.676816 kubelet[2550]: I0625 16:23:30.676739 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:30.677441 kubelet[2550]: E0625 16:23:30.677416 2550 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.32:6443/api/v1/nodes\": dial tcp 172.31.29.32:6443: connect: connection refused" node="ip-172-31-29-32" Jun 25 16:23:30.899537 containerd[1794]: time="2024-06-25T16:23:30.899490604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-32,Uid:719e29391ebf84c2ac52bf21b770c7e4,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:30.918804 containerd[1794]: time="2024-06-25T16:23:30.918747553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-32,Uid:cccd11b331768fd8fd796eb033608ff9,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:30.920352 containerd[1794]: time="2024-06-25T16:23:30.920308756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-32,Uid:de587db70178addac48911e7a00312a8,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:30.963190 kubelet[2550]: E0625 16:23:30.962781 2550 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": dial tcp 172.31.29.32:6443: connect: connection refused" interval="800ms" Jun 25 16:23:31.079407 kubelet[2550]: I0625 16:23:31.079367 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:31.079709 kubelet[2550]: E0625 16:23:31.079677 2550 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.32:6443/api/v1/nodes\": dial tcp 172.31.29.32:6443: connect: connection refused" node="ip-172-31-29-32" Jun 25 16:23:31.476403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968834608.mount: Deactivated successfully. 
Jun 25 16:23:31.498219 containerd[1794]: time="2024-06-25T16:23:31.498163106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.500946 containerd[1794]: time="2024-06-25T16:23:31.500873830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:23:31.503416 containerd[1794]: time="2024-06-25T16:23:31.503370789Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.505845 containerd[1794]: time="2024-06-25T16:23:31.505788934Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:23:31.508194 containerd[1794]: time="2024-06-25T16:23:31.508153711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.512899 containerd[1794]: time="2024-06-25T16:23:31.512846747Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.515629 containerd[1794]: time="2024-06-25T16:23:31.515584899Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.518652 containerd[1794]: time="2024-06-25T16:23:31.518574763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.521422 containerd[1794]: time="2024-06-25T16:23:31.521333048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:23:31.526984 containerd[1794]: time="2024-06-25T16:23:31.526934236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.529970 containerd[1794]: time="2024-06-25T16:23:31.529918489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.306493ms" Jun 25 16:23:31.532921 containerd[1794]: time="2024-06-25T16:23:31.532875878Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.536616 containerd[1794]: time="2024-06-25T16:23:31.534727386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 
25 16:23:31.539567 containerd[1794]: time="2024-06-25T16:23:31.539518134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.543624 containerd[1794]: time="2024-06-25T16:23:31.543575814Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.544772 containerd[1794]: time="2024-06-25T16:23:31.544731413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.869115ms" Jun 25 16:23:31.548316 containerd[1794]: time="2024-06-25T16:23:31.548273058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:31.549281 containerd[1794]: time="2024-06-25T16:23:31.549242883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.823351ms" Jun 25 16:23:31.772790 kubelet[2550]: E0625 16:23:31.765285 2550 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": dial tcp 172.31.29.32:6443: connect: connection refused" interval="1.6s" Jun 25 16:23:31.772790 kubelet[2550]: W0625 16:23:31.765369 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.772790 kubelet[2550]: E0625 16:23:31.765426 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.772790 kubelet[2550]: W0625 16:23:31.765650 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.772790 kubelet[2550]: E0625 16:23:31.765691 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.796153 kubelet[2550]: W0625 16:23:31.796074 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.31.29.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-32&limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.796359 kubelet[2550]: E0625 16:23:31.796342 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-32&limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.860626 containerd[1794]: time="2024-06-25T16:23:31.860294349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:31.860626 containerd[1794]: time="2024-06-25T16:23:31.860375526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.860626 containerd[1794]: time="2024-06-25T16:23:31.860408165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:31.860626 containerd[1794]: time="2024-06-25T16:23:31.860445277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.865146 containerd[1794]: time="2024-06-25T16:23:31.864819364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:31.865146 containerd[1794]: time="2024-06-25T16:23:31.864894287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.865146 containerd[1794]: time="2024-06-25T16:23:31.864918237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:31.865146 containerd[1794]: time="2024-06-25T16:23:31.864933850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.879326 containerd[1794]: time="2024-06-25T16:23:31.879218389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:31.879492 containerd[1794]: time="2024-06-25T16:23:31.879348226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.879492 containerd[1794]: time="2024-06-25T16:23:31.879392944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:31.879492 containerd[1794]: time="2024-06-25T16:23:31.879430968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:31.882319 kubelet[2550]: I0625 16:23:31.882289 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:31.882684 kubelet[2550]: E0625 16:23:31.882664 2550 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.32:6443/api/v1/nodes\": dial tcp 172.31.29.32:6443: connect: connection refused" node="ip-172-31-29-32" Jun 25 16:23:31.902104 systemd[1]: Started cri-containerd-559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d.scope - libcontainer container 559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d. Jun 25 16:23:31.914676 systemd[1]: Started cri-containerd-0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144.scope - libcontainer container 0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144. Jun 25 16:23:31.919560 kubelet[2550]: W0625 16:23:31.918066 2550 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.919560 kubelet[2550]: E0625 16:23:31.918140 2550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:31.940102 systemd[1]: Started cri-containerd-66dce11d1f34a4c6eae1cff8f47383f205eca1699ad7cc44de64bc3d5995a666.scope - libcontainer container 66dce11d1f34a4c6eae1cff8f47383f205eca1699ad7cc44de64bc3d5995a666. 
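The `Failed to ensure lease exists, will retry` records above show the retry interval doubling from 200ms through 400ms and 800ms to 1.6s while the API server at 172.31.29.32:6443 stays unreachable. A small sketch that just reproduces that doubling; the cap is an assumption, since no later interval is visible in this capture:

```python
# Reproduce the doubling retry interval observed in the lease-controller records:
# 200ms -> 400ms -> 800ms -> 1.6s. The cap below is assumed, not taken from the log.
def lease_retry_intervals(start_ms: int = 200, cap_ms: int = 7_000):
    interval = start_ms
    while True:
        yield interval
        interval = min(interval * 2, cap_ms)

gen = lease_retry_intervals()
print([next(gen) for _ in range(5)])  # [200, 400, 800, 1600, 3200]
```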
Jun 25 16:23:31.941000 audit: BPF prog-id=61 op=LOAD Jun 25 16:23:31.945000 audit: BPF prog-id=62 op=LOAD Jun 25 16:23:31.945000 audit: BPF prog-id=63 op=LOAD Jun 25 16:23:31.945000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=2602 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535396331663662363536353437396166653833653362373339616262 Jun 25 16:23:31.946000 audit: BPF prog-id=64 op=LOAD Jun 25 16:23:31.947000 audit: BPF prog-id=65 op=LOAD Jun 25 16:23:31.947000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2605 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.946000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=2602 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535396331663662363536353437396166653833653362373339616262 Jun 25 16:23:31.947000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:23:31.947000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:23:31.947000 audit: BPF prog-id=66 op=LOAD Jun 25 16:23:31.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373965633938383134363562396363383832396137653938373635 Jun 25 16:23:31.948000 audit: BPF prog-id=67 op=LOAD Jun 25 16:23:31.948000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2605 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373965633938383134363562396363383832396137653938373635 Jun 25 16:23:31.948000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:23:31.947000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=2602 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.947000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535396331663662363536353437396166653833653362373339616262 Jun 25 16:23:31.948000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:23:31.949000 audit: BPF prog-id=68 op=LOAD Jun 25 16:23:31.949000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2605 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373965633938383134363562396363383832396137653938373635 Jun 25 16:23:31.967000 audit: BPF prog-id=69 op=LOAD Jun 25 16:23:31.967000 audit: BPF prog-id=70 op=LOAD Jun 25 16:23:31.967000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2626 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646365313164316633346134633665616531636666386634373338 Jun 25 16:23:31.967000 audit: BPF prog-id=71 op=LOAD Jun 25 16:23:31.967000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2626 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646365313164316633346134633665616531636666386634373338 Jun 25 16:23:31.967000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:23:31.967000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:23:31.967000 audit: BPF prog-id=72 op=LOAD Jun 25 16:23:31.967000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2626 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:31.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646365313164316633346134633665616531636666386634373338 Jun 25 16:23:32.016785 containerd[1794]: time="2024-06-25T16:23:32.016726404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-32,Uid:cccd11b331768fd8fd796eb033608ff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144\"" 
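The audit PROCTITLE fields above are the audited process's command line, hex-encoded with NUL-separated arguments. A small sketch that decodes the leading portion of one of the runc PROCTITLE values copied from the records above:

    # Leading portion of a PROCTITLE value from the audit records above.
    hex_proctitle = (
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> runc --root /run/containerd/runc/k8s.io

The full values decode to the complete runc invocations containerd-shim uses to create the kube-scheduler, kube-controller-manager, and kube-apiserver containers whose IDs appear in the surrounding messages.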
Jun 25 16:23:32.024539 containerd[1794]: time="2024-06-25T16:23:32.024414822Z" level=info msg="CreateContainer within sandbox \"0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:23:32.027688 containerd[1794]: time="2024-06-25T16:23:32.027626528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-32,Uid:719e29391ebf84c2ac52bf21b770c7e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"66dce11d1f34a4c6eae1cff8f47383f205eca1699ad7cc44de64bc3d5995a666\"" Jun 25 16:23:32.036204 containerd[1794]: time="2024-06-25T16:23:32.036169308Z" level=info msg="CreateContainer within sandbox \"66dce11d1f34a4c6eae1cff8f47383f205eca1699ad7cc44de64bc3d5995a666\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:23:32.058045 containerd[1794]: time="2024-06-25T16:23:32.057986776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-32,Uid:de587db70178addac48911e7a00312a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d\"" Jun 25 16:23:32.067325 containerd[1794]: time="2024-06-25T16:23:32.067278645Z" level=info msg="CreateContainer within sandbox \"559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:23:32.071343 containerd[1794]: time="2024-06-25T16:23:32.071288078Z" level=info msg="CreateContainer within sandbox \"0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e\"" Jun 25 16:23:32.074833 containerd[1794]: time="2024-06-25T16:23:32.074796154Z" level=info msg="StartContainer for \"eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e\"" Jun 25 16:23:32.089524 containerd[1794]: time="2024-06-25T16:23:32.089472168Z" level=info msg="CreateContainer within sandbox \"66dce11d1f34a4c6eae1cff8f47383f205eca1699ad7cc44de64bc3d5995a666\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61f35840bc6ab3b887c4737c8e8584c0f09909a43d5dcb52c6c500d47485f1db\"" Jun 25 16:23:32.090595 containerd[1794]: time="2024-06-25T16:23:32.090555419Z" level=info msg="StartContainer for \"61f35840bc6ab3b887c4737c8e8584c0f09909a43d5dcb52c6c500d47485f1db\"" Jun 25 16:23:32.109083 systemd[1]: Started cri-containerd-eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e.scope - libcontainer container eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e. 
Jun 25 16:23:32.121666 containerd[1794]: time="2024-06-25T16:23:32.121612981Z" level=info msg="CreateContainer within sandbox \"559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f\"" Jun 25 16:23:32.123255 containerd[1794]: time="2024-06-25T16:23:32.123218359Z" level=info msg="StartContainer for \"4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f\"" Jun 25 16:23:32.147000 audit: BPF prog-id=73 op=LOAD Jun 25 16:23:32.148000 audit: BPF prog-id=74 op=LOAD Jun 25 16:23:32.148000 audit[2721]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2605 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626139366561336634336366363032323938383066373361636264 Jun 25 16:23:32.148000 audit: BPF prog-id=75 op=LOAD Jun 25 16:23:32.148000 audit[2721]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2605 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626139366561336634336366363032323938383066373361636264 Jun 25 16:23:32.149000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:23:32.149000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:23:32.149000 audit: BPF prog-id=76 op=LOAD Jun 25 16:23:32.149000 audit[2721]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2605 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626139366561336634336366363032323938383066373361636264 Jun 25 16:23:32.168100 systemd[1]: Started cri-containerd-61f35840bc6ab3b887c4737c8e8584c0f09909a43d5dcb52c6c500d47485f1db.scope - libcontainer container 61f35840bc6ab3b887c4737c8e8584c0f09909a43d5dcb52c6c500d47485f1db. Jun 25 16:23:32.180263 systemd[1]: Started cri-containerd-4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f.scope - libcontainer container 4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f. 
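As the "Started cri-containerd-….scope - libcontainer container …" lines show, systemd places each CRI-managed container in a transient scope unit named after its container ID. A sketch (assuming the cgroup hierarchy is mounted at /sys/fs/cgroup; the walk itself is illustrative, not something the log shows being run) that locates those scope directories on the node:

    import os

    # Walk the cgroup hierarchy and report cri-containerd scope directories,
    # e.g. the scope for sandbox 559c1f6b... (kube-scheduler) started above.
    for root, dirs, _files in os.walk("/sys/fs/cgroup"):
        for d in dirs:
            if d.startswith("cri-containerd-") and d.endswith(".scope"):
                print(os.path.join(root, d))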
Jun 25 16:23:32.211000 audit: BPF prog-id=77 op=LOAD Jun 25 16:23:32.212000 audit: BPF prog-id=78 op=LOAD Jun 25 16:23:32.212000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2626 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631663335383430626336616233623838376334373337633865383538 Jun 25 16:23:32.212000 audit: BPF prog-id=79 op=LOAD Jun 25 16:23:32.212000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2626 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631663335383430626336616233623838376334373337633865383538 Jun 25 16:23:32.213000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:23:32.213000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:23:32.213000 audit: BPF prog-id=80 op=LOAD Jun 25 16:23:32.213000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2626 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631663335383430626336616233623838376334373337633865383538 Jun 25 16:23:32.217619 containerd[1794]: time="2024-06-25T16:23:32.217574780Z" level=info msg="StartContainer for \"eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e\" returns successfully" Jun 25 16:23:32.241000 audit: BPF prog-id=81 op=LOAD Jun 25 16:23:32.241000 audit: BPF prog-id=82 op=LOAD Jun 25 16:23:32.241000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2602 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431393762306338363332336634353732346233666263326566376134 Jun 25 16:23:32.241000 audit: BPF prog-id=83 op=LOAD Jun 25 16:23:32.241000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2602 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.241000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431393762306338363332336634353732346233666263326566376134 Jun 25 16:23:32.241000 audit: BPF prog-id=83 op=UNLOAD Jun 25 16:23:32.241000 audit: BPF prog-id=82 op=UNLOAD Jun 25 16:23:32.242000 audit: BPF prog-id=84 op=LOAD Jun 25 16:23:32.242000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2602 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:32.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431393762306338363332336634353732346233666263326566376134 Jun 25 16:23:32.272366 kubelet[2550]: E0625 16:23:32.272290 2550 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.32:6443: connect: connection refused Jun 25 16:23:32.288212 containerd[1794]: time="2024-06-25T16:23:32.288051236Z" level=info msg="StartContainer for \"61f35840bc6ab3b887c4737c8e8584c0f09909a43d5dcb52c6c500d47485f1db\" returns successfully" Jun 25 16:23:32.327618 containerd[1794]: time="2024-06-25T16:23:32.327544154Z" level=info msg="StartContainer for \"4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f\" returns successfully" Jun 25 16:23:33.165000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.165000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0005d2000 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:33.165000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.165000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.165000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000154400 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:33.165000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.366227 kubelet[2550]: E0625 16:23:33.366177 2550 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": dial tcp 172.31.29.32:6443: connect: connection refused" interval="3.2s" Jun 25 16:23:33.484708 kubelet[2550]: I0625 16:23:33.484678 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:33.485058 kubelet[2550]: E0625 16:23:33.485039 2550 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.32:6443/api/v1/nodes\": dial tcp 172.31.29.32:6443: connect: connection refused" node="ip-172-31-29-32" Jun 25 16:23:35.458000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.461157 kernel: kauditd_printk_skb: 104 callbacks suppressed Jun 25 16:23:35.461264 kernel: audit: type=1400 audit(1719332615.458:337): avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.458000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00356b110 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.466075 kernel: audit: type=1300 audit(1719332615.458:337): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00356b110 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.469042 kernel: audit: type=1327 audit(1719332615.458:337): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.458000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.471652 kernel: audit: type=1400 audit(1719332615.462:338): avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6307 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.462000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6307 
scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.462000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00356b230 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.479034 kernel: audit: type=1300 audit(1719332615.462:338): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00356b230 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.479158 kernel: audit: type=1327 audit(1719332615.462:338): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.462000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.481824 kernel: audit: type=1400 audit(1719332615.472:339): avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.472000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.472000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=57 a1=c0072330c0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.490958 kernel: audit: type=1300 audit(1719332615.472:339): arch=c000003e syscall=254 success=no exit=-13 a0=57 a1=c0072330c0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.491075 kernel: audit: type=1327 audit(1719332615.472:339): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.472000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.493917 kernel: audit: type=1400 
audit(1719332615.479:340): avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6322 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.479000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6322 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.479000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=58 a1=c006fef6b0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.479000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.524000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.524000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c004604a40 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.524000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:35.524000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:35.524000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c0069b54d0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:23:35.524000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:23:36.061628 kubelet[2550]: E0625 16:23:36.061585 2550 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-32" not found Jun 25 16:23:36.311201 kubelet[2550]: I0625 16:23:36.311147 2550 apiserver.go:52] "Watching apiserver" Jun 25 16:23:36.358051 kubelet[2550]: I0625 16:23:36.357929 2550 desired_state_of_world_populator.go:159] "Finished 
populating initial desired state of world" Jun 25 16:23:36.423256 kubelet[2550]: E0625 16:23:36.423214 2550 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-32" not found Jun 25 16:23:36.570581 kubelet[2550]: E0625 16:23:36.570547 2550 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-32\" not found" node="ip-172-31-29-32" Jun 25 16:23:36.688502 kubelet[2550]: I0625 16:23:36.688411 2550 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:36.702724 kubelet[2550]: I0625 16:23:36.702690 2550 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-32" Jun 25 16:23:38.715965 systemd[1]: Reloading. Jun 25 16:23:38.955762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:23:39.109000 audit: BPF prog-id=85 op=LOAD Jun 25 16:23:39.110000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:23:39.114000 audit: BPF prog-id=86 op=LOAD Jun 25 16:23:39.114000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:23:39.114000 audit: BPF prog-id=87 op=LOAD Jun 25 16:23:39.114000 audit: BPF prog-id=88 op=LOAD Jun 25 16:23:39.114000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:23:39.114000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:23:39.115000 audit: BPF prog-id=89 op=LOAD Jun 25 16:23:39.115000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:23:39.115000 audit: BPF prog-id=90 op=LOAD Jun 25 16:23:39.115000 audit: BPF prog-id=91 op=LOAD Jun 25 16:23:39.115000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:23:39.115000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:23:39.118000 audit: BPF prog-id=92 op=LOAD Jun 25 16:23:39.118000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:23:39.119000 audit: BPF prog-id=93 op=LOAD Jun 25 16:23:39.119000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:23:39.120000 audit: BPF prog-id=94 op=LOAD Jun 25 16:23:39.120000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:23:39.121000 audit: BPF prog-id=95 op=LOAD Jun 25 16:23:39.121000 audit: BPF prog-id=81 op=UNLOAD Jun 25 16:23:39.122000 audit: BPF prog-id=96 op=LOAD Jun 25 16:23:39.122000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:23:39.123000 audit: BPF prog-id=97 op=LOAD Jun 25 16:23:39.123000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:23:39.123000 audit: BPF prog-id=98 op=LOAD Jun 25 16:23:39.123000 audit: BPF prog-id=99 op=LOAD Jun 25 16:23:39.123000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:23:39.123000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:23:39.124000 audit: BPF prog-id=100 op=LOAD Jun 25 16:23:39.124000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:23:39.125000 audit: BPF prog-id=101 op=LOAD Jun 25 16:23:39.125000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:23:39.128000 audit: BPF prog-id=102 op=LOAD Jun 25 16:23:39.128000 audit: BPF prog-id=103 op=LOAD Jun 25 16:23:39.128000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:23:39.128000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:23:39.129000 audit: BPF prog-id=104 op=LOAD Jun 25 16:23:39.129000 audit: BPF prog-id=77 op=UNLOAD Jun 25 16:23:39.151305 kubelet[2550]: I0625 16:23:39.149915 2550 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:23:39.150143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 16:23:39.171320 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:23:39.171602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:39.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.171681 systemd[1]: kubelet.service: Consumed 1.008s CPU time. Jun 25 16:23:39.178697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:39.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.460620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:39.579971 kubelet[2901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:23:39.580383 kubelet[2901]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:23:39.580427 kubelet[2901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:23:39.580568 kubelet[2901]: I0625 16:23:39.580540 2901 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:23:39.588664 kubelet[2901]: I0625 16:23:39.588055 2901 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:23:39.588664 kubelet[2901]: I0625 16:23:39.588085 2901 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:23:39.588664 kubelet[2901]: I0625 16:23:39.588349 2901 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:23:39.596007 kubelet[2901]: I0625 16:23:39.594343 2901 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:23:39.598466 kubelet[2901]: I0625 16:23:39.597142 2901 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.612947 2901 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.613253 2901 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.613487 2901 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.613519 2901 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.613532 2901 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:23:39.613656 kubelet[2901]: I0625 16:23:39.613578 2901 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:39.614090 kubelet[2901]: I0625 16:23:39.613719 2901 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:23:39.614090 kubelet[2901]: I0625 16:23:39.613736 2901 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:23:39.615980 kubelet[2901]: I0625 16:23:39.615957 2901 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:23:39.620128 kubelet[2901]: I0625 16:23:39.620086 2901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:23:39.631765 kubelet[2901]: I0625 16:23:39.631737 2901 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:23:39.633320 kubelet[2901]: I0625 16:23:39.633295 2901 server.go:1232] "Started kubelet" Jun 25 16:23:39.641885 kubelet[2901]: I0625 16:23:39.641836 2901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:23:39.650808 update_engine[1785]: I0625 16:23:39.650763 1785 update_attempter.cc:509] Updating boot flags... Jun 25 16:23:39.652225 kubelet[2901]: E0625 16:23:39.651122 2901 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:23:39.652225 kubelet[2901]: E0625 16:23:39.651154 2901 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:23:39.658486 kubelet[2901]: I0625 16:23:39.658453 2901 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:23:39.661397 kubelet[2901]: I0625 16:23:39.661372 2901 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:23:39.717959 kubelet[2901]: I0625 16:23:39.717008 2901 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:23:39.720915 kubelet[2901]: I0625 16:23:39.663561 2901 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:23:39.721146 kubelet[2901]: I0625 16:23:39.721125 2901 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:23:39.721217 kubelet[2901]: I0625 16:23:39.666336 2901 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:23:39.722427 kubelet[2901]: I0625 16:23:39.666658 2901 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:23:39.767853 kubelet[2901]: I0625 16:23:39.767675 2901 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-32" Jun 25 16:23:39.776388 kubelet[2901]: I0625 16:23:39.776361 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:23:39.779271 kubelet[2901]: I0625 16:23:39.779231 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:23:39.779656 kubelet[2901]: I0625 16:23:39.779637 2901 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:23:39.779984 kubelet[2901]: I0625 16:23:39.779966 2901 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:23:39.780413 kubelet[2901]: E0625 16:23:39.780398 2901 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:23:39.826688 kubelet[2901]: I0625 16:23:39.826648 2901 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-29-32" Jun 25 16:23:39.826839 kubelet[2901]: I0625 16:23:39.826759 2901 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-32" Jun 25 16:23:39.881897 kubelet[2901]: E0625 16:23:39.881639 2901 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:23:39.901921 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2949) Jun 25 16:23:40.018919 kubelet[2901]: I0625 16:23:40.017724 2901 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:23:40.023441 kubelet[2901]: I0625 16:23:40.019380 2901 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:23:40.023441 kubelet[2901]: I0625 16:23:40.019421 2901 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:40.023441 kubelet[2901]: I0625 16:23:40.019635 2901 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:23:40.023441 kubelet[2901]: I0625 16:23:40.019664 2901 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:23:40.023441 kubelet[2901]: I0625 16:23:40.019675 2901 policy_none.go:49] "None policy: Start" Jun 25 16:23:40.059467 kubelet[2901]: I0625 16:23:40.058199 2901 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:23:40.059467 kubelet[2901]: I0625 16:23:40.058238 2901 state_mem.go:35] "Initializing new in-memory state store" Jun 25 
16:23:40.059467 kubelet[2901]: I0625 16:23:40.058472 2901 state_mem.go:75] "Updated machine memory state" Jun 25 16:23:40.069655 kubelet[2901]: I0625 16:23:40.069631 2901 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:23:40.071645 kubelet[2901]: I0625 16:23:40.071621 2901 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:23:40.094945 kubelet[2901]: I0625 16:23:40.094913 2901 topology_manager.go:215] "Topology Admit Handler" podUID="719e29391ebf84c2ac52bf21b770c7e4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-32" Jun 25 16:23:40.095786 kubelet[2901]: I0625 16:23:40.095231 2901 topology_manager.go:215] "Topology Admit Handler" podUID="cccd11b331768fd8fd796eb033608ff9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.095786 kubelet[2901]: I0625 16:23:40.095318 2901 topology_manager.go:215] "Topology Admit Handler" podUID="de587db70178addac48911e7a00312a8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-32" Jun 25 16:23:40.126133 kubelet[2901]: I0625 16:23:40.125363 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.126133 kubelet[2901]: I0625 16:23:40.125447 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.126133 kubelet[2901]: I0625 16:23:40.125484 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de587db70178addac48911e7a00312a8-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-32\" (UID: \"de587db70178addac48911e7a00312a8\") " pod="kube-system/kube-scheduler-ip-172-31-29-32" Jun 25 16:23:40.126133 kubelet[2901]: I0625 16:23:40.125761 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-ca-certs\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 16:23:40.126133 kubelet[2901]: I0625 16:23:40.125818 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 16:23:40.126611 kubelet[2901]: I0625 16:23:40.125936 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719e29391ebf84c2ac52bf21b770c7e4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-32\" (UID: \"719e29391ebf84c2ac52bf21b770c7e4\") " pod="kube-system/kube-apiserver-ip-172-31-29-32" Jun 25 
16:23:40.126611 kubelet[2901]: I0625 16:23:40.125983 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.126611 kubelet[2901]: I0625 16:23:40.126023 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.126611 kubelet[2901]: I0625 16:23:40.126344 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cccd11b331768fd8fd796eb033608ff9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-32\" (UID: \"cccd11b331768fd8fd796eb033608ff9\") " pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.132912 kubelet[2901]: E0625 16:23:40.126845 2901 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-32\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-32" Jun 25 16:23:40.444499 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2953) Jun 25 16:23:40.628070 kubelet[2901]: I0625 16:23:40.627980 2901 apiserver.go:52] "Watching apiserver" Jun 25 16:23:40.728933 kubelet[2901]: I0625 16:23:40.723508 2901 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:23:40.947229 kubelet[2901]: I0625 16:23:40.947097 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-32" podStartSLOduration=0.947040637 podCreationTimestamp="2024-06-25 16:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:40.936439038 +0000 UTC m=+1.467475380" watchObservedRunningTime="2024-06-25 16:23:40.947040637 +0000 UTC m=+1.478076978" Jun 25 16:23:40.957655 kubelet[2901]: I0625 16:23:40.957602 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-32" podStartSLOduration=0.957555119 podCreationTimestamp="2024-06-25 16:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:40.956773678 +0000 UTC m=+1.487810020" watchObservedRunningTime="2024-06-25 16:23:40.957555119 +0000 UTC m=+1.488591455" Jun 25 16:23:40.957836 kubelet[2901]: I0625 16:23:40.957798 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-32" podStartSLOduration=2.957755203 podCreationTimestamp="2024-06-25 16:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:40.947900378 +0000 UTC m=+1.478936720" watchObservedRunningTime="2024-06-25 16:23:40.957755203 +0000 UTC m=+1.488791546" Jun 25 16:23:46.215991 sudo[2088]: pam_unix(sudo:session): 
session closed for user root Jun 25 16:23:46.215000 audit[2088]: USER_END pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.220096 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 16:23:46.220221 kernel: audit: type=1106 audit(1719332626.215:385): pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.217000 audit[2088]: CRED_DISP pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.224920 kernel: audit: type=1104 audit(1719332626.217:386): pid=2088 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.245481 sshd[2085]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:46.246000 audit[2085]: USER_END pid=2085 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:46.248000 audit[2085]: CRED_DISP pid=2085 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:46.254242 kernel: audit: type=1106 audit(1719332626.246:387): pid=2085 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:46.254347 kernel: audit: type=1104 audit(1719332626.248:388): pid=2085 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:23:46.254392 kernel: audit: type=1131 audit(1719332626.251:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.32:22-139.178.89.65:33970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.32:22-139.178.89.65:33970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:46.251853 systemd[1]: sshd@6-172.31.29.32:22-139.178.89.65:33970.service: Deactivated successfully. Jun 25 16:23:46.252910 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:23:46.253119 systemd[1]: session-7.scope: Consumed 5.505s CPU time. Jun 25 16:23:46.255980 systemd-logind[1784]: Session 7 logged out. Waiting for processes to exit. 
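The AVC records in this section (permissive=0) show the confined kube-apiserver and kube-controller-manager containers being denied watches on the certificate files under /etc/kubernetes/pki, whose etc_t label the container_t domain may not watch; both components keep running and the node registers successfully, so the denials appear non-fatal here. A sketch that summarizes such denials from a saved copy of this journal dump (the filename "node.log" is an assumption):

    import re
    from collections import Counter

    # Count SELinux watch denials per (comm, path) in a saved journal dump.
    pattern = re.compile(
        r'avc:\s+denied\s+\{ watch \}.*?comm="([^"]+)".*?path="([^"]+)"'
    )
    counts = Counter()
    with open("node.log") as fh:          # assumed path of the saved log
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[m.groups()] += 1
    for (comm, path), n in counts.most_common():
        print(f"{n:4d}  {comm}  {path}")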
Jun 25 16:23:46.257218 systemd-logind[1784]: Removed session 7. Jun 25 16:23:48.785000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:48.792973 kernel: audit: type=1400 audit(1719332628.785:390): avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:48.793092 kernel: audit: type=1300 audit(1719332628.785:390): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f98400 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.785000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f98400 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.802224 kernel: audit: type=1327 audit(1719332628.785:390): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:48.785000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:48.785000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:48.822893 kernel: audit: type=1400 audit(1719332628.785:391): avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:48.785000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f985c0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.785000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:48.788000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 
tclass=file permissive=0 Jun 25 16:23:48.827948 kernel: audit: type=1300 audit(1719332628.785:391): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f985c0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.788000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f98780 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.788000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:48.792000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:48.792000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f98940 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:48.792000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:50.550000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=6347 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:23:50.550000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0000ea9c0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:23:50.550000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:52.863634 kubelet[2901]: I0625 16:23:52.863609 2901 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:23:52.864812 containerd[1794]: time="2024-06-25T16:23:52.864752417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 16:23:52.865894 kubelet[2901]: I0625 16:23:52.865855 2901 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:23:53.629761 kubelet[2901]: I0625 16:23:53.629721 2901 topology_manager.go:215] "Topology Admit Handler" podUID="c0a08a2a-7a36-48b9-b8af-d3d934aded03" podNamespace="kube-system" podName="kube-proxy-sx52h" Jun 25 16:23:53.641197 systemd[1]: Created slice kubepods-besteffort-podc0a08a2a_7a36_48b9_b8af_d3d934aded03.slice - libcontainer container kubepods-besteffort-podc0a08a2a_7a36_48b9_b8af_d3d934aded03.slice. Jun 25 16:23:53.754912 kubelet[2901]: I0625 16:23:53.754877 2901 topology_manager.go:215] "Topology Admit Handler" podUID="0710893f-fd20-43ff-a569-098536bca642" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-2jbms" Jun 25 16:23:53.759572 kubelet[2901]: I0625 16:23:53.759550 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0a08a2a-7a36-48b9-b8af-d3d934aded03-kube-proxy\") pod \"kube-proxy-sx52h\" (UID: \"c0a08a2a-7a36-48b9-b8af-d3d934aded03\") " pod="kube-system/kube-proxy-sx52h" Jun 25 16:23:53.759797 kubelet[2901]: I0625 16:23:53.759787 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0a08a2a-7a36-48b9-b8af-d3d934aded03-lib-modules\") pod \"kube-proxy-sx52h\" (UID: \"c0a08a2a-7a36-48b9-b8af-d3d934aded03\") " pod="kube-system/kube-proxy-sx52h" Jun 25 16:23:53.760021 kubelet[2901]: I0625 16:23:53.760008 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0a08a2a-7a36-48b9-b8af-d3d934aded03-xtables-lock\") pod \"kube-proxy-sx52h\" (UID: \"c0a08a2a-7a36-48b9-b8af-d3d934aded03\") " pod="kube-system/kube-proxy-sx52h" Jun 25 16:23:53.760153 kubelet[2901]: I0625 16:23:53.760143 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crplt\" (UniqueName: \"kubernetes.io/projected/c0a08a2a-7a36-48b9-b8af-d3d934aded03-kube-api-access-crplt\") pod \"kube-proxy-sx52h\" (UID: \"c0a08a2a-7a36-48b9-b8af-d3d934aded03\") " pod="kube-system/kube-proxy-sx52h" Jun 25 16:23:53.761686 systemd[1]: Created slice kubepods-besteffort-pod0710893f_fd20_43ff_a569_098536bca642.slice - libcontainer container kubepods-besteffort-pod0710893f_fd20_43ff_a569_098536bca642.slice. 
Jun 25 16:23:53.861075 kubelet[2901]: I0625 16:23:53.861037 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnxd\" (UniqueName: \"kubernetes.io/projected/0710893f-fd20-43ff-a569-098536bca642-kube-api-access-7tnxd\") pod \"tigera-operator-76c4974c85-2jbms\" (UID: \"0710893f-fd20-43ff-a569-098536bca642\") " pod="tigera-operator/tigera-operator-76c4974c85-2jbms" Jun 25 16:23:53.861240 kubelet[2901]: I0625 16:23:53.861123 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0710893f-fd20-43ff-a569-098536bca642-var-lib-calico\") pod \"tigera-operator-76c4974c85-2jbms\" (UID: \"0710893f-fd20-43ff-a569-098536bca642\") " pod="tigera-operator/tigera-operator-76c4974c85-2jbms" Jun 25 16:23:53.952044 containerd[1794]: time="2024-06-25T16:23:53.951435915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sx52h,Uid:c0a08a2a-7a36-48b9-b8af-d3d934aded03,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:54.024621 containerd[1794]: time="2024-06-25T16:23:54.024520556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:54.024838 containerd[1794]: time="2024-06-25T16:23:54.024812556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:54.025198 containerd[1794]: time="2024-06-25T16:23:54.025158745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:54.025345 containerd[1794]: time="2024-06-25T16:23:54.025323322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:54.059261 systemd[1]: Started cri-containerd-3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b.scope - libcontainer container 3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b. 
Jun 25 16:23:54.079802 containerd[1794]: time="2024-06-25T16:23:54.076333356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2jbms,Uid:0710893f-fd20-43ff-a569-098536bca642,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:23:54.113000 audit: BPF prog-id=105 op=LOAD Jun 25 16:23:54.116269 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 16:23:54.116364 kernel: audit: type=1334 audit(1719332634.113:395): prog-id=105 op=LOAD Jun 25 16:23:54.116000 audit: BPF prog-id=106 op=LOAD Jun 25 16:23:54.116000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3167 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.124729 kernel: audit: type=1334 audit(1719332634.116:396): prog-id=106 op=LOAD Jun 25 16:23:54.124848 kernel: audit: type=1300 audit(1719332634.116:396): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3167 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.124922 kernel: audit: type=1327 audit(1719332634.116:396): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323732333662633661343234373862643538633133316139363031 Jun 25 16:23:54.124957 kernel: audit: type=1334 audit(1719332634.116:397): prog-id=107 op=LOAD Jun 25 16:23:54.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323732333662633661343234373862643538633133316139363031 Jun 25 16:23:54.116000 audit: BPF prog-id=107 op=LOAD Jun 25 16:23:54.116000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3167 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.131910 kernel: audit: type=1300 audit(1719332634.116:397): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3167 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.132061 kernel: audit: type=1327 audit(1719332634.116:397): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323732333662633661343234373862643538633133316139363031 Jun 25 16:23:54.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323732333662633661343234373862643538633133316139363031 Jun 25 16:23:54.134952 kernel: audit: type=1334 audit(1719332634.116:398): prog-id=107 op=UNLOAD Jun 25 16:23:54.116000 audit: 
BPF prog-id=107 op=UNLOAD Jun 25 16:23:54.136425 kernel: audit: type=1334 audit(1719332634.116:399): prog-id=106 op=UNLOAD Jun 25 16:23:54.116000 audit: BPF prog-id=106 op=UNLOAD Jun 25 16:23:54.117000 audit: BPF prog-id=108 op=LOAD Jun 25 16:23:54.117000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3167 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323732333662633661343234373862643538633133316139363031 Jun 25 16:23:54.138932 kernel: audit: type=1334 audit(1719332634.117:400): prog-id=108 op=LOAD Jun 25 16:23:54.156436 containerd[1794]: time="2024-06-25T16:23:54.156387340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sx52h,Uid:c0a08a2a-7a36-48b9-b8af-d3d934aded03,Namespace:kube-system,Attempt:0,} returns sandbox id \"3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b\"" Jun 25 16:23:54.160958 containerd[1794]: time="2024-06-25T16:23:54.160820586Z" level=info msg="CreateContainer within sandbox \"3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:23:54.166320 containerd[1794]: time="2024-06-25T16:23:54.166218957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:54.166320 containerd[1794]: time="2024-06-25T16:23:54.166292657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:54.166551 containerd[1794]: time="2024-06-25T16:23:54.166517088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:54.166668 containerd[1794]: time="2024-06-25T16:23:54.166544166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:54.204088 systemd[1]: Started cri-containerd-bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4.scope - libcontainer container bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4. 
Jun 25 16:23:54.207897 containerd[1794]: time="2024-06-25T16:23:54.207821353Z" level=info msg="CreateContainer within sandbox \"3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03ecdb68e392c8a3200618ce237f072d98334fae834092ef2ab2df32a9357d52\"" Jun 25 16:23:54.212440 containerd[1794]: time="2024-06-25T16:23:54.212345177Z" level=info msg="StartContainer for \"03ecdb68e392c8a3200618ce237f072d98334fae834092ef2ab2df32a9357d52\"" Jun 25 16:23:54.224000 audit: BPF prog-id=109 op=LOAD Jun 25 16:23:54.224000 audit: BPF prog-id=110 op=LOAD Jun 25 16:23:54.224000 audit[3219]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3202 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266376333306461353935323439376636383362326164643263333738 Jun 25 16:23:54.224000 audit: BPF prog-id=111 op=LOAD Jun 25 16:23:54.224000 audit[3219]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3202 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266376333306461353935323439376636383362326164643263333738 Jun 25 16:23:54.224000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:23:54.225000 audit: BPF prog-id=110 op=UNLOAD Jun 25 16:23:54.225000 audit: BPF prog-id=112 op=LOAD Jun 25 16:23:54.225000 audit[3219]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3202 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266376333306461353935323439376636383362326164643263333738 Jun 25 16:23:54.261080 systemd[1]: Started cri-containerd-03ecdb68e392c8a3200618ce237f072d98334fae834092ef2ab2df32a9357d52.scope - libcontainer container 03ecdb68e392c8a3200618ce237f072d98334fae834092ef2ab2df32a9357d52. 
Jun 25 16:23:54.295108 containerd[1794]: time="2024-06-25T16:23:54.295032008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2jbms,Uid:0710893f-fd20-43ff-a569-098536bca642,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4\"" Jun 25 16:23:54.299000 audit: BPF prog-id=113 op=LOAD Jun 25 16:23:54.299000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3167 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656364623638653339326338613332303036313863653233376630 Jun 25 16:23:54.299000 audit: BPF prog-id=114 op=LOAD Jun 25 16:23:54.299000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3167 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656364623638653339326338613332303036313863653233376630 Jun 25 16:23:54.299000 audit: BPF prog-id=114 op=UNLOAD Jun 25 16:23:54.300000 audit: BPF prog-id=113 op=UNLOAD Jun 25 16:23:54.300000 audit: BPF prog-id=115 op=LOAD Jun 25 16:23:54.300000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3167 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656364623638653339326338613332303036313863653233376630 Jun 25 16:23:54.303353 containerd[1794]: time="2024-06-25T16:23:54.303312163Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:23:54.324023 containerd[1794]: time="2024-06-25T16:23:54.323972906Z" level=info msg="StartContainer for \"03ecdb68e392c8a3200618ce237f072d98334fae834092ef2ab2df32a9357d52\" returns successfully" Jun 25 16:23:54.881708 systemd[1]: run-containerd-runc-k8s.io-3027236bc6a42478bd58c131a96012948945429fcfb351c53c9a5805ebf1243b-runc.J1a9N8.mount: Deactivated successfully. 
Jun 25 16:23:54.934000 audit[3301]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=3301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:54.934000 audit[3301]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee15bb820 a2=0 a3=7ffee15bb80c items=0 ppid=3254 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:23:54.937000 audit[3302]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:54.937000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee83d0d00 a2=0 a3=7ffee83d0cec items=0 ppid=3254 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:23:54.939000 audit[3303]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:54.939000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1b3b4130 a2=0 a3=7ffd1b3b411c items=0 ppid=3254 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:23:54.947000 audit[3304]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:54.947000 audit[3304]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde9d469c0 a2=0 a3=7ffde9d469ac items=0 ppid=3254 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:23:54.965000 audit[3305]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:54.965000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff02960170 a2=0 a3=7fff0296015c items=0 ppid=3254 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:23:54.971000 audit[3306]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:23:54.971000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6cbfa010 a2=0 a3=7ffc6cbf9ffc items=0 ppid=3254 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:54.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:23:55.128000 audit[3307]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.128000 audit[3307]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe6649c570 a2=0 a3=7ffe6649c55c items=0 ppid=3254 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:23:55.212000 audit[3309]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.212000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffff4889d10 a2=0 a3=7ffff4889cfc items=0 ppid=3254 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:23:55.224000 audit[3312]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.224000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff29c29260 a2=0 a3=7fff29c2924c items=0 ppid=3254 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:23:55.226000 audit[3313]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.226000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa9d191e0 a2=0 a3=7fffa9d191cc items=0 ppid=3254 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 
16:23:55.231000 audit[3315]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.231000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0e0a7be0 a2=0 a3=7ffd0e0a7bcc items=0 ppid=3254 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:23:55.233000 audit[3316]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.233000 audit[3316]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc14e84ba0 a2=0 a3=7ffc14e84b8c items=0 ppid=3254 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:23:55.238000 audit[3318]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.238000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb075a980 a2=0 a3=7fffb075a96c items=0 ppid=3254 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:23:55.245000 audit[3321]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.245000 audit[3321]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd6ec5e050 a2=0 a3=7ffd6ec5e03c items=0 ppid=3254 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:23:55.247000 audit[3322]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.247000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3ccdf080 a2=0 a3=7ffe3ccdf06c items=0 ppid=3254 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:23:55.253000 audit[3324]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.253000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc01d7f090 a2=0 a3=7ffc01d7f07c items=0 ppid=3254 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:23:55.256000 audit[3325]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.256000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd28b010c0 a2=0 a3=7ffd28b010ac items=0 ppid=3254 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:23:55.261000 audit[3327]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.261000 audit[3327]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4aa7def0 a2=0 a3=7fff4aa7dedc items=0 ppid=3254 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:55.269000 audit[3330]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.269000 audit[3330]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5f925970 a2=0 a3=7ffc5f92595c items=0 ppid=3254 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:55.275000 audit[3333]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.275000 
audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6e147b70 a2=0 a3=7ffe6e147b5c items=0 ppid=3254 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:23:55.277000 audit[3334]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.277000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0a19f020 a2=0 a3=7fff0a19f00c items=0 ppid=3254 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:23:55.286000 audit[3336]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.286000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6baa2120 a2=0 a3=7ffc6baa210c items=0 ppid=3254 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:55.295000 audit[3339]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.295000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce6239d90 a2=0 a3=7ffce6239d7c items=0 ppid=3254 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:55.297000 audit[3340]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.297000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdc6b1c90 a2=0 a3=7fffdc6b1c7c items=0 ppid=3254 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:23:55.306000 
audit[3342]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:55.306000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff521ce4d0 a2=0 a3=7fff521ce4bc items=0 ppid=3254 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:23:55.341000 audit[3348]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:55.341000 audit[3348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcef3b42b0 a2=0 a3=7ffcef3b429c items=0 ppid=3254 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:55.348000 audit[3348]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:55.348000 audit[3348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcef3b42b0 a2=0 a3=7ffcef3b429c items=0 ppid=3254 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:55.352000 audit[3354]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.352000 audit[3354]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff10e16e10 a2=0 a3=7fff10e16dfc items=0 ppid=3254 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:23:55.364000 audit[3356]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.364000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff62bc37a0 a2=0 a3=7fff62bc378c items=0 ppid=3254 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.364000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:23:55.380000 audit[3359]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.380000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4c8af6c0 a2=0 a3=7fff4c8af6ac items=0 ppid=3254 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:23:55.383000 audit[3360]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.383000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5ad90380 a2=0 a3=7ffc5ad9036c items=0 ppid=3254 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:23:55.395000 audit[3362]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.395000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccb8230c0 a2=0 a3=7ffccb8230ac items=0 ppid=3254 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:23:55.398000 audit[3363]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.398000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5f020260 a2=0 a3=7ffd5f02024c items=0 ppid=3254 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:23:55.405000 audit[3365]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.405000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1e604290 a2=0 
a3=7ffd1e60427c items=0 ppid=3254 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:23:55.412000 audit[3368]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.412000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffd9366990 a2=0 a3=7fffd936697c items=0 ppid=3254 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:23:55.414000 audit[3369]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.414000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb656ca0 a2=0 a3=7fffbb656c8c items=0 ppid=3254 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:23:55.419000 audit[3371]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.419000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9dcb5980 a2=0 a3=7ffe9dcb596c items=0 ppid=3254 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:23:55.421000 audit[3372]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.421000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc34f45390 a2=0 a3=7ffc34f4537c items=0 ppid=3254 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:23:55.426000 
audit[3374]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.426000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd214a3030 a2=0 a3=7ffd214a301c items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:55.433000 audit[3377]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.433000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd667a49a0 a2=0 a3=7ffd667a498c items=0 ppid=3254 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:23:55.441000 audit[3380]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.441000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7ec91f90 a2=0 a3=7ffe7ec91f7c items=0 ppid=3254 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:23:55.444000 audit[3381]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.444000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef145ae90 a2=0 a3=7ffef145ae7c items=0 ppid=3254 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:23:55.448000 audit[3383]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.448000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffff0f5bab0 a2=0 a3=7ffff0f5ba9c items=0 ppid=3254 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:55.458000 audit[3386]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.458000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea05ea7b0 a2=0 a3=7ffea05ea79c items=0 ppid=3254 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:55.460000 audit[3387]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.460000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb0704ca0 a2=0 a3=7ffdb0704c8c items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:23:55.464000 audit[3389]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.464000 audit[3389]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff52bfff40 a2=0 a3=7fff52bfff2c items=0 ppid=3254 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:23:55.467000 audit[3390]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.467000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff26f1a870 a2=0 a3=7fff26f1a85c items=0 ppid=3254 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:23:55.472000 audit[3392]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.472000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffd86c821c0 a2=0 a3=7ffd86c821ac items=0 ppid=3254 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:55.480000 audit[3395]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:55.480000 audit[3395]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf34f8530 a2=0 a3=7ffcf34f851c items=0 ppid=3254 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:55.485000 audit[3397]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:23:55.485000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc49fcb560 a2=0 a3=7ffc49fcb54c items=0 ppid=3254 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.485000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:55.486000 audit[3397]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:23:55.486000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc49fcb560 a2=0 a3=7ffc49fcb54c items=0 ppid=3254 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.486000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:55.934791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114032538.mount: Deactivated successfully. 
Jun 25 16:23:56.859500 containerd[1794]: time="2024-06-25T16:23:56.859445773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.861281 containerd[1794]: time="2024-06-25T16:23:56.861122389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jun 25 16:23:56.864816 containerd[1794]: time="2024-06-25T16:23:56.864773719Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.868494 containerd[1794]: time="2024-06-25T16:23:56.868454414Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.871372 containerd[1794]: time="2024-06-25T16:23:56.871325920Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.872321 containerd[1794]: time="2024-06-25T16:23:56.872278044Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.568785587s" Jun 25 16:23:56.872481 containerd[1794]: time="2024-06-25T16:23:56.872456290Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:23:56.875083 containerd[1794]: time="2024-06-25T16:23:56.875048135Z" level=info msg="CreateContainer within sandbox \"bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:23:56.916386 containerd[1794]: time="2024-06-25T16:23:56.916245733Z" level=info msg="CreateContainer within sandbox \"bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916\"" Jun 25 16:23:56.920450 containerd[1794]: time="2024-06-25T16:23:56.920403821Z" level=info msg="StartContainer for \"d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916\"" Jun 25 16:23:56.961105 systemd[1]: Started cri-containerd-d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916.scope - libcontainer container d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916. 
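As a quick sanity check on the pull above, the two numbers containerd reports ("bytes read" and the elapsed time in the PullImage line) give an effective transfer rate for quay.io/tigera/operator:v1.34.0. The elapsed time also covers unpacking, so treat this as a rough figure only:

    bytes_read = 22_076_068   # "bytes read" from the "stop pulling image" line
    elapsed_s = 2.568785587   # "in 2.568785587s" from the PullImage line
    print(f"~{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # -> ~8.6 MB/s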
Jun 25 16:23:56.978000 audit: BPF prog-id=116 op=LOAD Jun 25 16:23:56.978000 audit: BPF prog-id=117 op=LOAD Jun 25 16:23:56.978000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3202 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376431353436346662333363666563303135303765376165306539 Jun 25 16:23:56.979000 audit: BPF prog-id=118 op=LOAD Jun 25 16:23:56.979000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3202 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376431353436346662333363666563303135303765376165306539 Jun 25 16:23:56.979000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:23:56.979000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:23:56.979000 audit: BPF prog-id=119 op=LOAD Jun 25 16:23:56.979000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3202 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376431353436346662333363666563303135303765376165306539 Jun 25 16:23:57.000512 containerd[1794]: time="2024-06-25T16:23:57.000434687Z" level=info msg="StartContainer for \"d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916\" returns successfully" Jun 25 16:23:57.985042 kubelet[2901]: I0625 16:23:57.985008 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sx52h" podStartSLOduration=4.983518099 podCreationTimestamp="2024-06-25 16:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:55.067645188 +0000 UTC m=+15.598681532" watchObservedRunningTime="2024-06-25 16:23:57.983518099 +0000 UTC m=+18.514554439" Jun 25 16:23:57.986351 kubelet[2901]: I0625 16:23:57.986326 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-2jbms" podStartSLOduration=2.412813079 podCreationTimestamp="2024-06-25 16:23:53 +0000 UTC" firstStartedPulling="2024-06-25 16:23:54.299519157 +0000 UTC m=+14.830555480" lastFinishedPulling="2024-06-25 16:23:56.872975057 +0000 UTC m=+17.404011390" observedRunningTime="2024-06-25 16:23:57.986130749 +0000 UTC m=+18.517167078" watchObservedRunningTime="2024-06-25 16:23:57.986268989 +0000 UTC m=+18.517305331" Jun 25 16:24:00.358006 
kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:24:00.358161 kernel: audit: type=1325 audit(1719332640.355:469): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.355000 audit[3449]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.355000 audit[3449]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd9ea75ee0 a2=0 a3=7ffd9ea75ecc items=0 ppid=3254 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.362181 kernel: audit: type=1300 audit(1719332640.355:469): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd9ea75ee0 a2=0 a3=7ffd9ea75ecc items=0 ppid=3254 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.362312 kernel: audit: type=1327 audit(1719332640.355:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.356000 audit[3449]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.367070 kernel: audit: type=1325 audit(1719332640.356:470): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.356000 audit[3449]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9ea75ee0 a2=0 a3=0 items=0 ppid=3254 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.371171 kernel: audit: type=1300 audit(1719332640.356:470): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9ea75ee0 a2=0 a3=0 items=0 ppid=3254 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.373908 kernel: audit: type=1327 audit(1719332640.356:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.373000 audit[3451]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.373000 audit[3451]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc5ce07080 a2=0 a3=7ffc5ce0706c items=0 ppid=3254 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.380071 kernel: audit: type=1325 
audit(1719332640.373:471): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.380234 kernel: audit: type=1300 audit(1719332640.373:471): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc5ce07080 a2=0 a3=7ffc5ce0706c items=0 ppid=3254 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.380278 kernel: audit: type=1327 audit(1719332640.373:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.375000 audit[3451]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.375000 audit[3451]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5ce07080 a2=0 a3=0 items=0 ppid=3254 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.385024 kernel: audit: type=1325 audit(1719332640.375:472): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:00.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:00.502304 kubelet[2901]: I0625 16:24:00.502261 2901 topology_manager.go:215] "Topology Admit Handler" podUID="82be6177-1aef-4de0-9da7-02d418af1307" podNamespace="calico-system" podName="calico-typha-67f58ff66c-jqgm9" Jun 25 16:24:00.512682 systemd[1]: Created slice kubepods-besteffort-pod82be6177_1aef_4de0_9da7_02d418af1307.slice - libcontainer container kubepods-besteffort-pod82be6177_1aef_4de0_9da7_02d418af1307.slice. 
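The slice name systemd creates here follows directly from the pod's QoS class and UID shown in the Topology Admit Handler entry: dashes in the UID become underscores, prefixed with kubepods-<qos>-pod. A small illustration of that naming convention as it appears in this log (this node is using the systemd cgroup driver):

    def pod_slice(pod_uid: str, qos_class: str = "besteffort") -> str:
        # kubelet/systemd cgroup naming as seen in this log: dashes in the
        # pod UID are replaced with underscores, prefixed with the QoS class.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("82be6177-1aef-4de0-9da7-02d418af1307"))
    # -> kubepods-besteffort-pod82be6177_1aef_4de0_9da7_02d418af1307.slice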
Jun 25 16:24:00.524016 kubelet[2901]: W0625 16:24:00.523977 2901 reflector.go:535] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.524266 kubelet[2901]: E0625 16:24:00.524251 2901 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.524612 kubelet[2901]: W0625 16:24:00.524595 2901 reflector.go:535] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.524728 kubelet[2901]: E0625 16:24:00.524717 2901 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.526230 kubelet[2901]: W0625 16:24:00.526210 2901 reflector.go:535] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.526365 kubelet[2901]: E0625 16:24:00.526354 2901 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-32" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-32' and this object Jun 25 16:24:00.645887 kubelet[2901]: I0625 16:24:00.645759 2901 topology_manager.go:215] "Topology Admit Handler" podUID="f062a170-8c89-4858-aa3e-e51414b54076" podNamespace="calico-system" podName="calico-node-5m4lw" Jun 25 16:24:00.647057 kubelet[2901]: I0625 16:24:00.647032 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/82be6177-1aef-4de0-9da7-02d418af1307-typha-certs\") pod \"calico-typha-67f58ff66c-jqgm9\" (UID: \"82be6177-1aef-4de0-9da7-02d418af1307\") " pod="calico-system/calico-typha-67f58ff66c-jqgm9" Jun 25 16:24:00.647242 kubelet[2901]: I0625 16:24:00.647219 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82be6177-1aef-4de0-9da7-02d418af1307-tigera-ca-bundle\") pod \"calico-typha-67f58ff66c-jqgm9\" (UID: \"82be6177-1aef-4de0-9da7-02d418af1307\") " pod="calico-system/calico-typha-67f58ff66c-jqgm9" Jun 25 16:24:00.647381 kubelet[2901]: I0625 16:24:00.647370 2901 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzmj4\" (UniqueName: \"kubernetes.io/projected/82be6177-1aef-4de0-9da7-02d418af1307-kube-api-access-qzmj4\") pod \"calico-typha-67f58ff66c-jqgm9\" (UID: \"82be6177-1aef-4de0-9da7-02d418af1307\") " pod="calico-system/calico-typha-67f58ff66c-jqgm9" Jun 25 16:24:00.655094 systemd[1]: Created slice kubepods-besteffort-podf062a170_8c89_4858_aa3e_e51414b54076.slice - libcontainer container kubepods-besteffort-podf062a170_8c89_4858_aa3e_e51414b54076.slice. Jun 25 16:24:00.749752 kubelet[2901]: I0625 16:24:00.749701 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-cni-bin-dir\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750116 kubelet[2901]: I0625 16:24:00.750092 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f062a170-8c89-4858-aa3e-e51414b54076-node-certs\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750325 kubelet[2901]: I0625 16:24:00.750311 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-cni-log-dir\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750439 kubelet[2901]: I0625 16:24:00.750426 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-var-run-calico\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750605 kubelet[2901]: I0625 16:24:00.750586 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-xtables-lock\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750686 kubelet[2901]: I0625 16:24:00.750642 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f062a170-8c89-4858-aa3e-e51414b54076-tigera-ca-bundle\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750737 kubelet[2901]: I0625 16:24:00.750694 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-var-lib-calico\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750737 kubelet[2901]: I0625 16:24:00.750733 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-flexvol-driver-host\") pod \"calico-node-5m4lw\" (UID: 
\"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750826 kubelet[2901]: I0625 16:24:00.750807 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-cni-net-dir\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.750890 kubelet[2901]: I0625 16:24:00.750858 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-policysync\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.751017 kubelet[2901]: I0625 16:24:00.750912 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f062a170-8c89-4858-aa3e-e51414b54076-lib-modules\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.751076 kubelet[2901]: I0625 16:24:00.751020 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fg7b\" (UniqueName: \"kubernetes.io/projected/f062a170-8c89-4858-aa3e-e51414b54076-kube-api-access-8fg7b\") pod \"calico-node-5m4lw\" (UID: \"f062a170-8c89-4858-aa3e-e51414b54076\") " pod="calico-system/calico-node-5m4lw" Jun 25 16:24:00.761210 kubelet[2901]: I0625 16:24:00.761169 2901 topology_manager.go:215] "Topology Admit Handler" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" podNamespace="calico-system" podName="csi-node-driver-bcwhx" Jun 25 16:24:00.761882 kubelet[2901]: E0625 16:24:00.761843 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:00.851682 kubelet[2901]: I0625 16:24:00.851572 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2bece7e7-c85d-4cea-8dc0-bcb503dd2a60-varrun\") pod \"csi-node-driver-bcwhx\" (UID: \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\") " pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:00.851901 kubelet[2901]: I0625 16:24:00.851696 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rh7v\" (UniqueName: \"kubernetes.io/projected/2bece7e7-c85d-4cea-8dc0-bcb503dd2a60-kube-api-access-6rh7v\") pod \"csi-node-driver-bcwhx\" (UID: \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\") " pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:00.851901 kubelet[2901]: I0625 16:24:00.851747 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bece7e7-c85d-4cea-8dc0-bcb503dd2a60-kubelet-dir\") pod \"csi-node-driver-bcwhx\" (UID: \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\") " pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:00.851901 kubelet[2901]: I0625 16:24:00.851775 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2bece7e7-c85d-4cea-8dc0-bcb503dd2a60-socket-dir\") pod \"csi-node-driver-bcwhx\" (UID: \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\") " pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:00.852046 kubelet[2901]: I0625 16:24:00.851954 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2bece7e7-c85d-4cea-8dc0-bcb503dd2a60-registration-dir\") pod \"csi-node-driver-bcwhx\" (UID: \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\") " pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:00.860896 kubelet[2901]: E0625 16:24:00.856590 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.860896 kubelet[2901]: W0625 16:24:00.856624 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.860896 kubelet[2901]: E0625 16:24:00.856677 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.860896 kubelet[2901]: E0625 16:24:00.857000 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.860896 kubelet[2901]: W0625 16:24:00.857013 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.861847 kubelet[2901]: E0625 16:24:00.861816 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862145 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863018 kubelet[2901]: W0625 16:24:00.862159 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862269 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862415 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863018 kubelet[2901]: W0625 16:24:00.862424 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862444 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862660 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863018 kubelet[2901]: W0625 16:24:00.862669 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862690 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863018 kubelet[2901]: E0625 16:24:00.862924 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863506 kubelet[2901]: W0625 16:24:00.862935 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863506 kubelet[2901]: E0625 16:24:00.862954 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863506 kubelet[2901]: E0625 16:24:00.863291 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863506 kubelet[2901]: W0625 16:24:00.863302 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863506 kubelet[2901]: E0625 16:24:00.863323 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863738 kubelet[2901]: E0625 16:24:00.863530 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.863738 kubelet[2901]: W0625 16:24:00.863539 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.863738 kubelet[2901]: E0625 16:24:00.863555 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.863738 kubelet[2901]: E0625 16:24:00.863738 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.865549 kubelet[2901]: W0625 16:24:00.863746 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.865549 kubelet[2901]: E0625 16:24:00.863762 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.865549 kubelet[2901]: E0625 16:24:00.864009 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.865549 kubelet[2901]: W0625 16:24:00.864019 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.865549 kubelet[2901]: E0625 16:24:00.864035 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.869037 kubelet[2901]: E0625 16:24:00.869015 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.869200 kubelet[2901]: W0625 16:24:00.869181 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.869327 kubelet[2901]: E0625 16:24:00.869316 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.952727 kubelet[2901]: E0625 16:24:00.952626 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.952954 kubelet[2901]: W0625 16:24:00.952934 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.953069 kubelet[2901]: E0625 16:24:00.953059 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.953417 kubelet[2901]: E0625 16:24:00.953404 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.953586 kubelet[2901]: W0625 16:24:00.953499 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.953685 kubelet[2901]: E0625 16:24:00.953674 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.954040 kubelet[2901]: E0625 16:24:00.954028 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.954463 kubelet[2901]: W0625 16:24:00.954444 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.954749 kubelet[2901]: E0625 16:24:00.954570 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.955131 kubelet[2901]: E0625 16:24:00.955117 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.955219 kubelet[2901]: W0625 16:24:00.955208 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.955329 kubelet[2901]: E0625 16:24:00.955319 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.955598 kubelet[2901]: E0625 16:24:00.955588 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.955732 kubelet[2901]: W0625 16:24:00.955718 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.955824 kubelet[2901]: E0625 16:24:00.955814 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.956183 kubelet[2901]: E0625 16:24:00.956163 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.956183 kubelet[2901]: W0625 16:24:00.956179 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.956312 kubelet[2901]: E0625 16:24:00.956203 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.956849 kubelet[2901]: E0625 16:24:00.956833 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.956849 kubelet[2901]: W0625 16:24:00.956849 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.957249 kubelet[2901]: E0625 16:24:00.956894 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.957782 kubelet[2901]: E0625 16:24:00.957318 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.957782 kubelet[2901]: W0625 16:24:00.957332 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.957782 kubelet[2901]: E0625 16:24:00.957354 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.958153 kubelet[2901]: E0625 16:24:00.958129 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.958153 kubelet[2901]: W0625 16:24:00.958147 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.958275 kubelet[2901]: E0625 16:24:00.958256 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.958558 kubelet[2901]: E0625 16:24:00.958539 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.958558 kubelet[2901]: W0625 16:24:00.958555 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.958726 kubelet[2901]: E0625 16:24:00.958712 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.958819 kubelet[2901]: E0625 16:24:00.958765 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.958943 kubelet[2901]: W0625 16:24:00.958931 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.959065 kubelet[2901]: E0625 16:24:00.959050 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.959322 kubelet[2901]: E0625 16:24:00.959310 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.959405 kubelet[2901]: W0625 16:24:00.959394 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.959507 kubelet[2901]: E0625 16:24:00.959487 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.959834 kubelet[2901]: E0625 16:24:00.959821 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.959966 kubelet[2901]: W0625 16:24:00.959951 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.960079 kubelet[2901]: E0625 16:24:00.960068 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.960450 kubelet[2901]: E0625 16:24:00.960431 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.960450 kubelet[2901]: W0625 16:24:00.960447 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.960582 kubelet[2901]: E0625 16:24:00.960470 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.960682 kubelet[2901]: E0625 16:24:00.960668 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.960731 kubelet[2901]: W0625 16:24:00.960684 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.960731 kubelet[2901]: E0625 16:24:00.960700 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.961174 kubelet[2901]: E0625 16:24:00.961159 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.961264 kubelet[2901]: W0625 16:24:00.961252 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.961345 kubelet[2901]: E0625 16:24:00.961336 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.961642 kubelet[2901]: E0625 16:24:00.961631 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.961746 kubelet[2901]: W0625 16:24:00.961734 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.961841 kubelet[2901]: E0625 16:24:00.961831 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.962390 kubelet[2901]: E0625 16:24:00.962377 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.962504 kubelet[2901]: W0625 16:24:00.962491 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.962589 kubelet[2901]: E0625 16:24:00.962580 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.962958 kubelet[2901]: E0625 16:24:00.962945 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.963047 kubelet[2901]: W0625 16:24:00.963035 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.963270 kubelet[2901]: E0625 16:24:00.963257 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.963764 kubelet[2901]: E0625 16:24:00.963749 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.964074 kubelet[2901]: W0625 16:24:00.964058 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.964164 kubelet[2901]: E0625 16:24:00.964155 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.964477 kubelet[2901]: E0625 16:24:00.964465 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.964579 kubelet[2901]: W0625 16:24:00.964566 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.964667 kubelet[2901]: E0625 16:24:00.964659 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.965001 kubelet[2901]: E0625 16:24:00.964988 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.965091 kubelet[2901]: W0625 16:24:00.965079 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.965180 kubelet[2901]: E0625 16:24:00.965171 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.965470 kubelet[2901]: E0625 16:24:00.965458 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.965590 kubelet[2901]: W0625 16:24:00.965577 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.966459 kubelet[2901]: E0625 16:24:00.965924 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.966993 kubelet[2901]: E0625 16:24:00.966979 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.967097 kubelet[2901]: W0625 16:24:00.967082 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.967378 kubelet[2901]: E0625 16:24:00.967358 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.967527 kubelet[2901]: E0625 16:24:00.967517 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.967606 kubelet[2901]: W0625 16:24:00.967596 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.967787 kubelet[2901]: E0625 16:24:00.967777 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.968101 kubelet[2901]: E0625 16:24:00.968091 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.968254 kubelet[2901]: W0625 16:24:00.968242 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.968447 kubelet[2901]: E0625 16:24:00.968436 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.968766 kubelet[2901]: E0625 16:24:00.968755 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.968850 kubelet[2901]: W0625 16:24:00.968841 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.968969 kubelet[2901]: E0625 16:24:00.968960 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.969303 kubelet[2901]: E0625 16:24:00.969292 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.969386 kubelet[2901]: W0625 16:24:00.969376 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.969459 kubelet[2901]: E0625 16:24:00.969452 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:00.969759 kubelet[2901]: E0625 16:24:00.969749 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.969847 kubelet[2901]: W0625 16:24:00.969836 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.969983 kubelet[2901]: E0625 16:24:00.969973 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:00.970311 kubelet[2901]: E0625 16:24:00.970298 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:00.970412 kubelet[2901]: W0625 16:24:00.970400 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:00.970486 kubelet[2901]: E0625 16:24:00.970477 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.067201 kubelet[2901]: E0625 16:24:01.067174 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.067401 kubelet[2901]: W0625 16:24:01.067384 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.067595 kubelet[2901]: E0625 16:24:01.067570 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.067978 kubelet[2901]: E0625 16:24:01.067953 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.067978 kubelet[2901]: W0625 16:24:01.067971 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.068135 kubelet[2901]: E0625 16:24:01.068022 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.070005 kubelet[2901]: E0625 16:24:01.069976 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.070005 kubelet[2901]: W0625 16:24:01.069999 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.070206 kubelet[2901]: E0625 16:24:01.070025 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.070398 kubelet[2901]: E0625 16:24:01.070383 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.070497 kubelet[2901]: W0625 16:24:01.070484 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.070581 kubelet[2901]: E0625 16:24:01.070572 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.070931 kubelet[2901]: E0625 16:24:01.070916 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.071041 kubelet[2901]: W0625 16:24:01.071028 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.071136 kubelet[2901]: E0625 16:24:01.071127 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.071479 kubelet[2901]: E0625 16:24:01.071466 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.071586 kubelet[2901]: W0625 16:24:01.071574 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.071678 kubelet[2901]: E0625 16:24:01.071653 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.172644 kubelet[2901]: E0625 16:24:01.172611 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.172644 kubelet[2901]: W0625 16:24:01.172635 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.173033 kubelet[2901]: E0625 16:24:01.172680 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.173365 kubelet[2901]: E0625 16:24:01.173342 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.173365 kubelet[2901]: W0625 16:24:01.173361 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.173492 kubelet[2901]: E0625 16:24:01.173384 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.173736 kubelet[2901]: E0625 16:24:01.173716 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.173736 kubelet[2901]: W0625 16:24:01.173733 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.173888 kubelet[2901]: E0625 16:24:01.173754 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.174628 kubelet[2901]: E0625 16:24:01.174607 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.174628 kubelet[2901]: W0625 16:24:01.174627 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.174756 kubelet[2901]: E0625 16:24:01.174647 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.174948 kubelet[2901]: E0625 16:24:01.174931 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.174948 kubelet[2901]: W0625 16:24:01.174948 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.175088 kubelet[2901]: E0625 16:24:01.174965 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.175327 kubelet[2901]: E0625 16:24:01.175308 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.175327 kubelet[2901]: W0625 16:24:01.175327 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.175426 kubelet[2901]: E0625 16:24:01.175345 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.276296 kubelet[2901]: E0625 16:24:01.276267 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.276586 kubelet[2901]: W0625 16:24:01.276563 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.276725 kubelet[2901]: E0625 16:24:01.276712 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.277151 kubelet[2901]: E0625 16:24:01.277134 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.277274 kubelet[2901]: W0625 16:24:01.277260 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.277355 kubelet[2901]: E0625 16:24:01.277346 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.277986 kubelet[2901]: E0625 16:24:01.277967 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.277986 kubelet[2901]: W0625 16:24:01.277985 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.278146 kubelet[2901]: E0625 16:24:01.278007 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.278271 kubelet[2901]: E0625 16:24:01.278254 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.278325 kubelet[2901]: W0625 16:24:01.278272 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.278325 kubelet[2901]: E0625 16:24:01.278290 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.278532 kubelet[2901]: E0625 16:24:01.278514 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.278598 kubelet[2901]: W0625 16:24:01.278532 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.278598 kubelet[2901]: E0625 16:24:01.278550 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.278889 kubelet[2901]: E0625 16:24:01.278856 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.278959 kubelet[2901]: W0625 16:24:01.278889 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.278959 kubelet[2901]: E0625 16:24:01.278907 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.379752 kubelet[2901]: E0625 16:24:01.379719 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.380264 kubelet[2901]: W0625 16:24:01.380236 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.380400 kubelet[2901]: E0625 16:24:01.380388 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.380821 kubelet[2901]: E0625 16:24:01.380806 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.382504 kubelet[2901]: W0625 16:24:01.382473 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.382813 kubelet[2901]: E0625 16:24:01.382796 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.383903 kubelet[2901]: E0625 16:24:01.383337 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.384024 kubelet[2901]: W0625 16:24:01.383935 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.384024 kubelet[2901]: E0625 16:24:01.383976 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.387820 kubelet[2901]: E0625 16:24:01.387784 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.387820 kubelet[2901]: W0625 16:24:01.387816 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.388067 kubelet[2901]: E0625 16:24:01.387856 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.388250 kubelet[2901]: E0625 16:24:01.388227 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.388321 kubelet[2901]: W0625 16:24:01.388252 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.388321 kubelet[2901]: E0625 16:24:01.388273 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.388526 kubelet[2901]: E0625 16:24:01.388508 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.388526 kubelet[2901]: W0625 16:24:01.388526 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.388629 kubelet[2901]: E0625 16:24:01.388544 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.426000 audit[3519]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.426000 audit[3519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd3f34e9a0 a2=0 a3=7ffd3f34e98c items=0 ppid=3254 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.427000 audit[3519]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.427000 audit[3519]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3f34e9a0 a2=0 a3=0 items=0 ppid=3254 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.489623 kubelet[2901]: E0625 16:24:01.489594 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.489816 kubelet[2901]: W0625 16:24:01.489795 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.489934 kubelet[2901]: E0625 16:24:01.489921 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.490309 kubelet[2901]: E0625 16:24:01.490295 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.490410 kubelet[2901]: W0625 16:24:01.490398 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.490511 kubelet[2901]: E0625 16:24:01.490502 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
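The two audit records above capture netfilter rule programming (most likely kube-proxy refreshing Service rules): iptables-restore is invoked through /usr/sbin/xtables-nft-multi, and the full command line is logged in the PROCTITLE field as hex-encoded, NUL-separated argv. As an illustrative aside (not part of the log itself), a short Python sketch decodes such a PROCTITLE value back into a readable command line; the sample string is copied from the record above and decodes to "iptables-restore -w 5 -W 100000 --noflush --counters".

# Decode an audit PROCTITLE field (hex-encoded argv, NUL-separated).
# Illustrative only; the sample value is copied from the NETFILTER_CFG
# audit records above.
proctitle_hex = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)

def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    # argv entries are separated by NUL bytes in the audit record
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

print(" ".join(decode_proctitle(proctitle_hex)))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
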
Error: unexpected end of JSON input" Jun 25 16:24:01.490818 kubelet[2901]: E0625 16:24:01.490805 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.490934 kubelet[2901]: W0625 16:24:01.490921 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.491021 kubelet[2901]: E0625 16:24:01.491012 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.491321 kubelet[2901]: E0625 16:24:01.491310 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.491407 kubelet[2901]: W0625 16:24:01.491397 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.491527 kubelet[2901]: E0625 16:24:01.491519 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.492416 kubelet[2901]: E0625 16:24:01.492400 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.492516 kubelet[2901]: W0625 16:24:01.492504 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.492591 kubelet[2901]: E0625 16:24:01.492582 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.492949 kubelet[2901]: E0625 16:24:01.492936 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.493046 kubelet[2901]: W0625 16:24:01.493035 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.493134 kubelet[2901]: E0625 16:24:01.493126 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.594547 kubelet[2901]: E0625 16:24:01.594421 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.594547 kubelet[2901]: W0625 16:24:01.594462 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.594547 kubelet[2901]: E0625 16:24:01.594489 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.597358 kubelet[2901]: E0625 16:24:01.596977 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.597358 kubelet[2901]: W0625 16:24:01.597001 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.597358 kubelet[2901]: E0625 16:24:01.597033 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.597358 kubelet[2901]: E0625 16:24:01.597347 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.597358 kubelet[2901]: W0625 16:24:01.597359 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.597584 kubelet[2901]: E0625 16:24:01.597377 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.604587 kubelet[2901]: E0625 16:24:01.604557 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.604798 kubelet[2901]: W0625 16:24:01.604777 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.604957 kubelet[2901]: E0625 16:24:01.604943 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.605459 kubelet[2901]: E0625 16:24:01.605434 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.605574 kubelet[2901]: W0625 16:24:01.605560 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.605681 kubelet[2901]: E0625 16:24:01.605671 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.606104 kubelet[2901]: E0625 16:24:01.606088 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.606212 kubelet[2901]: W0625 16:24:01.606199 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.606763 kubelet[2901]: E0625 16:24:01.606746 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
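The flood of driver-call.go and plugins.go messages above is one repeating failure: on every plugin-probe pass the kubelet tries to execute the FlexVolume driver /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not present on this node yet, so the call produces no stdout and the subsequent JSON unmarshal fails with "unexpected end of JSON input". For context on the call convention only (this is a hypothetical sketch, not the missing nodeagent~uds driver), a FlexVolume driver is expected to answer init with a JSON status object on stdout, roughly:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver skeleton, shown only to illustrate the
# init handshake whose empty output the kubelet is complaining about.
# A real nodeagent~uds driver would also implement the mount/unmount calls.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success; attach=False declares that no attach/detach
        # phase is needed for this driver.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Unimplemented operations should still emit JSON rather than stay silent.
    print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Because the probe merely logs the error and skips the directory, these messages are noisy rather than fatal; a Calico pod2daemon-flexvol image, pulled further down in this log, is what normally installs a driver binary at that path, after which the probe stops failing.
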
Error: unexpected end of JSON input" Jun 25 16:24:01.763129 kubelet[2901]: E0625 16:24:01.762341 2901 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:24:01.769979 kubelet[2901]: E0625 16:24:01.769930 2901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82be6177-1aef-4de0-9da7-02d418af1307-tigera-ca-bundle podName:82be6177-1aef-4de0-9da7-02d418af1307 nodeName:}" failed. No retries permitted until 2024-06-25 16:24:02.262671385 +0000 UTC m=+22.793707728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/82be6177-1aef-4de0-9da7-02d418af1307-tigera-ca-bundle") pod "calico-typha-67f58ff66c-jqgm9" (UID: "82be6177-1aef-4de0-9da7-02d418af1307") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:24:01.770307 kubelet[2901]: E0625 16:24:01.770276 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.770307 kubelet[2901]: W0625 16:24:01.770294 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.770419 kubelet[2901]: E0625 16:24:01.770326 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.775896 kubelet[2901]: E0625 16:24:01.770954 2901 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 16:24:01.775896 kubelet[2901]: E0625 16:24:01.771045 2901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82be6177-1aef-4de0-9da7-02d418af1307-typha-certs podName:82be6177-1aef-4de0-9da7-02d418af1307 nodeName:}" failed. No retries permitted until 2024-06-25 16:24:02.271020202 +0000 UTC m=+22.802056542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/82be6177-1aef-4de0-9da7-02d418af1307-typha-certs") pod "calico-typha-67f58ff66c-jqgm9" (UID: "82be6177-1aef-4de0-9da7-02d418af1307") : failed to sync secret cache: timed out waiting for the condition Jun 25 16:24:01.775896 kubelet[2901]: E0625 16:24:01.775246 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.775896 kubelet[2901]: W0625 16:24:01.775270 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.775896 kubelet[2901]: E0625 16:24:01.775310 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
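The MountVolume.SetUp failures for tigera-ca-bundle and typha-certs above are transient cache-sync timeouts, and the nestedpendingoperations records show how the kubelet schedules the retry: no retry is permitted until the timestamp computed from durationBeforeRetry, which starts at the 500ms shown in the record and grows exponentially on repeated failures. The sketch below only makes that schedule concrete; the doubling factor and the cap are assumptions about the kubelet's backoff defaults and are not stated in this log.

# Sketch of the exponential backoff behind "durationBeforeRetry 500ms".
# The initial delay comes from the records above; the doubling factor and
# the cap are assumed defaults and may differ between kubelet versions.
from datetime import timedelta

INITIAL_DELAY = timedelta(milliseconds=500)   # matches the log
FACTOR = 2.0                                  # assumption
MAX_DELAY = timedelta(minutes=2, seconds=2)   # assumption

def retry_schedule(failures: int) -> list[timedelta]:
    """Delay imposed before each retry, for the first `failures` failures."""
    delays, current = [], INITIAL_DELAY
    for _ in range(failures):
        delays.append(current)
        current = min(current * FACTOR, MAX_DELAY)
    return delays

for attempt, delay in enumerate(retry_schedule(6), start=1):
    print(f"failure {attempt}: wait {delay.total_seconds():.1f}s before retrying")

Here the schedule never needs to get past its first step: the retry scheduled for 16:24:02.26 is consistent with the calico-typha sandbox being created about a second later further down in the log.
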
Error: unexpected end of JSON input" Jun 25 16:24:01.778057 kubelet[2901]: E0625 16:24:01.778019 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.778057 kubelet[2901]: W0625 16:24:01.778052 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.778287 kubelet[2901]: E0625 16:24:01.778224 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.780092 kubelet[2901]: E0625 16:24:01.780045 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.780092 kubelet[2901]: W0625 16:24:01.780088 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.780301 kubelet[2901]: E0625 16:24:01.780240 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.780489 kubelet[2901]: E0625 16:24:01.780471 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.780489 kubelet[2901]: W0625 16:24:01.780488 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.780643 kubelet[2901]: E0625 16:24:01.780628 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.785035 kubelet[2901]: E0625 16:24:01.784995 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.785193 kubelet[2901]: W0625 16:24:01.785043 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.788719 kubelet[2901]: E0625 16:24:01.788687 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.791056 kubelet[2901]: E0625 16:24:01.791022 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.791056 kubelet[2901]: W0625 16:24:01.791052 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.791430 kubelet[2901]: E0625 16:24:01.791091 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.791821 kubelet[2901]: E0625 16:24:01.791795 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.791821 kubelet[2901]: W0625 16:24:01.791818 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.791986 kubelet[2901]: E0625 16:24:01.791909 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.792342 kubelet[2901]: E0625 16:24:01.792318 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.792624 kubelet[2901]: W0625 16:24:01.792348 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.792700 kubelet[2901]: E0625 16:24:01.792641 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.792829 kubelet[2901]: E0625 16:24:01.792794 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.792829 kubelet[2901]: W0625 16:24:01.792812 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.792950 kubelet[2901]: E0625 16:24:01.792834 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.793127 kubelet[2901]: E0625 16:24:01.793110 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.793193 kubelet[2901]: W0625 16:24:01.793127 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.793241 kubelet[2901]: E0625 16:24:01.793231 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.793456 kubelet[2901]: E0625 16:24:01.793437 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.793456 kubelet[2901]: W0625 16:24:01.793455 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.793840 kubelet[2901]: E0625 16:24:01.793824 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.794065 kubelet[2901]: E0625 16:24:01.794023 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.794065 kubelet[2901]: W0625 16:24:01.794040 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.794065 kubelet[2901]: E0625 16:24:01.794063 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.794477 kubelet[2901]: E0625 16:24:01.794437 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.794477 kubelet[2901]: W0625 16:24:01.794456 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.794592 kubelet[2901]: E0625 16:24:01.794553 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.795393 kubelet[2901]: E0625 16:24:01.795372 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.795393 kubelet[2901]: W0625 16:24:01.795391 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.795687 kubelet[2901]: E0625 16:24:01.795585 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.795857 kubelet[2901]: E0625 16:24:01.795839 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.795857 kubelet[2901]: W0625 16:24:01.795858 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.798109 kubelet[2901]: E0625 16:24:01.798081 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.800228 kubelet[2901]: E0625 16:24:01.800195 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.800228 kubelet[2901]: W0625 16:24:01.800225 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.800405 kubelet[2901]: E0625 16:24:01.800395 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.800853 kubelet[2901]: E0625 16:24:01.800729 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.800977 kubelet[2901]: W0625 16:24:01.800858 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.800977 kubelet[2901]: E0625 16:24:01.800903 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.802085 kubelet[2901]: E0625 16:24:01.802063 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.802085 kubelet[2901]: W0625 16:24:01.802084 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.802210 kubelet[2901]: E0625 16:24:01.802106 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.853194 kubelet[2901]: E0625 16:24:01.853059 2901 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:24:01.853194 kubelet[2901]: E0625 16:24:01.853168 2901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f062a170-8c89-4858-aa3e-e51414b54076-tigera-ca-bundle podName:f062a170-8c89-4858-aa3e-e51414b54076 nodeName:}" failed. No retries permitted until 2024-06-25 16:24:02.353143429 +0000 UTC m=+22.884179756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/f062a170-8c89-4858-aa3e-e51414b54076-tigera-ca-bundle") pod "calico-node-5m4lw" (UID: "f062a170-8c89-4858-aa3e-e51414b54076") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:24:01.900024 kubelet[2901]: E0625 16:24:01.899991 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.900215 kubelet[2901]: W0625 16:24:01.900018 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.900215 kubelet[2901]: E0625 16:24:01.900062 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:01.913858 kubelet[2901]: E0625 16:24:01.913824 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.913858 kubelet[2901]: W0625 16:24:01.913852 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.914173 kubelet[2901]: E0625 16:24:01.913904 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:01.922615 kubelet[2901]: E0625 16:24:01.922581 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:01.922810 kubelet[2901]: W0625 16:24:01.922795 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:01.923826 kubelet[2901]: E0625 16:24:01.923806 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.025757 kubelet[2901]: E0625 16:24:02.025720 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.025757 kubelet[2901]: W0625 16:24:02.025748 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.026020 kubelet[2901]: E0625 16:24:02.025775 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.026094 kubelet[2901]: E0625 16:24:02.026071 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.026094 kubelet[2901]: W0625 16:24:02.026091 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.027817 kubelet[2901]: E0625 16:24:02.026110 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.028025 kubelet[2901]: E0625 16:24:02.028005 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.028150 kubelet[2901]: W0625 16:24:02.028134 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.028227 kubelet[2901]: E0625 16:24:02.028218 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.131113 kubelet[2901]: E0625 16:24:02.130387 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.131113 kubelet[2901]: W0625 16:24:02.130481 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.131113 kubelet[2901]: E0625 16:24:02.130605 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.131457 kubelet[2901]: E0625 16:24:02.131437 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.131525 kubelet[2901]: W0625 16:24:02.131458 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.131525 kubelet[2901]: E0625 16:24:02.131483 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.132127 kubelet[2901]: E0625 16:24:02.132108 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.132127 kubelet[2901]: W0625 16:24:02.132128 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.132240 kubelet[2901]: E0625 16:24:02.132151 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.236940 kubelet[2901]: E0625 16:24:02.236824 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.236940 kubelet[2901]: W0625 16:24:02.236856 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.236940 kubelet[2901]: E0625 16:24:02.236904 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.238448 kubelet[2901]: E0625 16:24:02.238350 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.238448 kubelet[2901]: W0625 16:24:02.238372 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.238448 kubelet[2901]: E0625 16:24:02.238402 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.238756 kubelet[2901]: E0625 16:24:02.238733 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.238756 kubelet[2901]: W0625 16:24:02.238752 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.238919 kubelet[2901]: E0625 16:24:02.238771 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.340035 kubelet[2901]: E0625 16:24:02.339996 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.340035 kubelet[2901]: W0625 16:24:02.340026 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.340359 kubelet[2901]: E0625 16:24:02.340055 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.340698 kubelet[2901]: E0625 16:24:02.340675 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.340698 kubelet[2901]: W0625 16:24:02.340693 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.340836 kubelet[2901]: E0625 16:24:02.340719 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.343987 kubelet[2901]: E0625 16:24:02.343946 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.343987 kubelet[2901]: W0625 16:24:02.343986 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.346217 kubelet[2901]: E0625 16:24:02.344273 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.348578 kubelet[2901]: E0625 16:24:02.348544 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.348915 kubelet[2901]: W0625 16:24:02.348892 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.351774 kubelet[2901]: E0625 16:24:02.351717 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.354842 kubelet[2901]: E0625 16:24:02.354779 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.355151 kubelet[2901]: W0625 16:24:02.355128 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.357547 kubelet[2901]: E0625 16:24:02.357523 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.363680 kubelet[2901]: E0625 16:24:02.363599 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.363680 kubelet[2901]: W0625 16:24:02.363652 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.363957 kubelet[2901]: E0625 16:24:02.363703 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.369225 kubelet[2901]: E0625 16:24:02.369100 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.369601 kubelet[2901]: W0625 16:24:02.369576 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.369964 kubelet[2901]: E0625 16:24:02.369938 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.371470 kubelet[2901]: E0625 16:24:02.371452 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.371609 kubelet[2901]: W0625 16:24:02.371594 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.371725 kubelet[2901]: E0625 16:24:02.371715 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.372160 kubelet[2901]: E0625 16:24:02.372149 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.372271 kubelet[2901]: W0625 16:24:02.372260 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.372369 kubelet[2901]: E0625 16:24:02.372360 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.391548 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.402676 kubelet[2901]: W0625 16:24:02.391576 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.391608 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.392010 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.402676 kubelet[2901]: W0625 16:24:02.392024 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.392045 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.392779 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.402676 kubelet[2901]: W0625 16:24:02.392793 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.402676 kubelet[2901]: E0625 16:24:02.392812 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.418292 kubelet[2901]: E0625 16:24:02.409090 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.418292 kubelet[2901]: W0625 16:24:02.409117 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.418292 kubelet[2901]: E0625 16:24:02.409148 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.446107 kubelet[2901]: E0625 16:24:02.446073 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.446107 kubelet[2901]: W0625 16:24:02.446101 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.446345 kubelet[2901]: E0625 16:24:02.446133 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.446506 kubelet[2901]: E0625 16:24:02.446487 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.446506 kubelet[2901]: W0625 16:24:02.446506 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.446617 kubelet[2901]: E0625 16:24:02.446524 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.446815 kubelet[2901]: E0625 16:24:02.446732 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.446815 kubelet[2901]: W0625 16:24:02.446747 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.446964 kubelet[2901]: E0625 16:24:02.446822 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.447087 kubelet[2901]: E0625 16:24:02.447072 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.447087 kubelet[2901]: W0625 16:24:02.447087 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.447204 kubelet[2901]: E0625 16:24:02.447103 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.447376 kubelet[2901]: E0625 16:24:02.447361 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.447600 kubelet[2901]: W0625 16:24:02.447376 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.447600 kubelet[2901]: E0625 16:24:02.447392 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:02.449296 kubelet[2901]: E0625 16:24:02.449270 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:02.449296 kubelet[2901]: W0625 16:24:02.449293 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:02.449566 kubelet[2901]: E0625 16:24:02.449316 2901 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:02.461596 containerd[1794]: time="2024-06-25T16:24:02.461533031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5m4lw,Uid:f062a170-8c89-4858-aa3e-e51414b54076,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:02.522942 containerd[1794]: time="2024-06-25T16:24:02.522732079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:02.523121 containerd[1794]: time="2024-06-25T16:24:02.522965556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.523121 containerd[1794]: time="2024-06-25T16:24:02.523015700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:02.523121 containerd[1794]: time="2024-06-25T16:24:02.523043637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.568133 systemd[1]: Started cri-containerd-ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004.scope - libcontainer container ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004. Jun 25 16:24:02.601000 audit: BPF prog-id=120 op=LOAD Jun 25 16:24:02.603000 audit: BPF prog-id=121 op=LOAD Jun 25 16:24:02.603000 audit[3608]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3597 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162306139656364376536663961626134666134623335626430626136 Jun 25 16:24:02.603000 audit: BPF prog-id=122 op=LOAD Jun 25 16:24:02.603000 audit[3608]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3597 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162306139656364376536663961626134666134623335626430626136 Jun 25 16:24:02.603000 audit: BPF prog-id=122 op=UNLOAD Jun 25 16:24:02.603000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:24:02.603000 audit: BPF prog-id=123 op=LOAD Jun 25 16:24:02.603000 audit[3608]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3597 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162306139656364376536663961626134666134623335626430626136 Jun 25 
16:24:02.627900 containerd[1794]: time="2024-06-25T16:24:02.627827685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5m4lw,Uid:f062a170-8c89-4858-aa3e-e51414b54076,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\"" Jun 25 16:24:02.629909 containerd[1794]: time="2024-06-25T16:24:02.629763420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67f58ff66c-jqgm9,Uid:82be6177-1aef-4de0-9da7-02d418af1307,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:02.655352 containerd[1794]: time="2024-06-25T16:24:02.649884993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:24:02.696177 containerd[1794]: time="2024-06-25T16:24:02.694744487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:02.696177 containerd[1794]: time="2024-06-25T16:24:02.694804793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.696177 containerd[1794]: time="2024-06-25T16:24:02.694838881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:02.696177 containerd[1794]: time="2024-06-25T16:24:02.694860217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.779459 systemd[1]: Started cri-containerd-a7c67d57b31b780217073c1766ccc02ff82b1ac7111d495e9f613ac07d5edf6b.scope - libcontainer container a7c67d57b31b780217073c1766ccc02ff82b1ac7111d495e9f613ac07d5edf6b. Jun 25 16:24:02.789009 kubelet[2901]: E0625 16:24:02.788058 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:02.829000 audit: BPF prog-id=124 op=LOAD Jun 25 16:24:02.831000 audit: BPF prog-id=125 op=LOAD Jun 25 16:24:02.831000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3637 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633637643537623331623738303231373037336331373636636363 Jun 25 16:24:02.831000 audit: BPF prog-id=126 op=LOAD Jun 25 16:24:02.831000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3637 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.831000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633637643537623331623738303231373037336331373636636363 Jun 25 16:24:02.831000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:24:02.831000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:24:02.831000 audit: BPF prog-id=127 op=LOAD Jun 25 16:24:02.831000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3637 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633637643537623331623738303231373037336331373636636363 Jun 25 16:24:02.916415 containerd[1794]: time="2024-06-25T16:24:02.916276224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67f58ff66c-jqgm9,Uid:82be6177-1aef-4de0-9da7-02d418af1307,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7c67d57b31b780217073c1766ccc02ff82b1ac7111d495e9f613ac07d5edf6b\"" Jun 25 16:24:04.149679 containerd[1794]: time="2024-06-25T16:24:04.149628961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:04.152652 containerd[1794]: time="2024-06-25T16:24:04.152583465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:24:04.154620 containerd[1794]: time="2024-06-25T16:24:04.154578455Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:04.157456 containerd[1794]: time="2024-06-25T16:24:04.157409493Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:04.160093 containerd[1794]: time="2024-06-25T16:24:04.160050251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:04.161649 containerd[1794]: time="2024-06-25T16:24:04.161598748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.511658297s" Jun 25 16:24:04.161831 containerd[1794]: time="2024-06-25T16:24:04.161805094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:24:04.163188 containerd[1794]: time="2024-06-25T16:24:04.163155712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:24:04.164914 containerd[1794]: 
time="2024-06-25T16:24:04.164860284Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:24:04.277175 containerd[1794]: time="2024-06-25T16:24:04.277116907Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9\"" Jun 25 16:24:04.278247 containerd[1794]: time="2024-06-25T16:24:04.278207230Z" level=info msg="StartContainer for \"a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9\"" Jun 25 16:24:04.388241 systemd[1]: Started cri-containerd-a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9.scope - libcontainer container a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9. Jun 25 16:24:04.413858 systemd[1]: run-containerd-runc-k8s.io-a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9-runc.Pviycv.mount: Deactivated successfully. Jun 25 16:24:04.422000 audit: BPF prog-id=128 op=LOAD Jun 25 16:24:04.422000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3597 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:04.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313064386461316432353665383831303662613836653732356235 Jun 25 16:24:04.422000 audit: BPF prog-id=129 op=LOAD Jun 25 16:24:04.422000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3597 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:04.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313064386461316432353665383831303662613836653732356235 Jun 25 16:24:04.422000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:24:04.423000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:24:04.423000 audit: BPF prog-id=130 op=LOAD Jun 25 16:24:04.423000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3597 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:04.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313064386461316432353665383831303662613836653732356235 Jun 25 16:24:04.448835 containerd[1794]: time="2024-06-25T16:24:04.448784966Z" level=info msg="StartContainer for \"a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9\" returns successfully" Jun 25 16:24:04.470597 systemd[1]: 
cri-containerd-a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9.scope: Deactivated successfully. Jun 25 16:24:04.476000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:24:04.516925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9-rootfs.mount: Deactivated successfully. Jun 25 16:24:04.762745 containerd[1794]: time="2024-06-25T16:24:04.734447797Z" level=info msg="shim disconnected" id=a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9 namespace=k8s.io Jun 25 16:24:04.763052 containerd[1794]: time="2024-06-25T16:24:04.762750485Z" level=warning msg="cleaning up after shim disconnected" id=a510d8da1d256e88106ba86e725b5999492368b6c58b714c18d73b3bccd384f9 namespace=k8s.io Jun 25 16:24:04.763052 containerd[1794]: time="2024-06-25T16:24:04.762778797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:24:04.782260 kubelet[2901]: E0625 16:24:04.781845 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:06.784263 kubelet[2901]: E0625 16:24:06.782814 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:06.944251 containerd[1794]: time="2024-06-25T16:24:06.944198092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:06.945858 containerd[1794]: time="2024-06-25T16:24:06.945795576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:24:06.947654 containerd[1794]: time="2024-06-25T16:24:06.947617287Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:06.950219 containerd[1794]: time="2024-06-25T16:24:06.950179436Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:06.953465 containerd[1794]: time="2024-06-25T16:24:06.953423856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:06.954387 containerd[1794]: time="2024-06-25T16:24:06.954338213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.790876814s" Jun 25 16:24:06.954548 containerd[1794]: time="2024-06-25T16:24:06.954394683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 
16:24:06.969040 containerd[1794]: time="2024-06-25T16:24:06.959117853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:24:06.980213 containerd[1794]: time="2024-06-25T16:24:06.980170706Z" level=info msg="CreateContainer within sandbox \"a7c67d57b31b780217073c1766ccc02ff82b1ac7111d495e9f613ac07d5edf6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:24:07.016265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153315948.mount: Deactivated successfully. Jun 25 16:24:07.043418 containerd[1794]: time="2024-06-25T16:24:07.042046847Z" level=info msg="CreateContainer within sandbox \"a7c67d57b31b780217073c1766ccc02ff82b1ac7111d495e9f613ac07d5edf6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5b7d837d69719ac4f034035c1c61dfd8f569bcca68ce0288422791527e5221f\"" Jun 25 16:24:07.052658 containerd[1794]: time="2024-06-25T16:24:07.052224592Z" level=info msg="StartContainer for \"e5b7d837d69719ac4f034035c1c61dfd8f569bcca68ce0288422791527e5221f\"" Jun 25 16:24:07.105088 systemd[1]: Started cri-containerd-e5b7d837d69719ac4f034035c1c61dfd8f569bcca68ce0288422791527e5221f.scope - libcontainer container e5b7d837d69719ac4f034035c1c61dfd8f569bcca68ce0288422791527e5221f. Jun 25 16:24:07.124000 audit: BPF prog-id=131 op=LOAD Jun 25 16:24:07.126355 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:24:07.126425 kernel: audit: type=1334 audit(1719332647.124:493): prog-id=131 op=LOAD Jun 25 16:24:07.126000 audit: BPF prog-id=132 op=LOAD Jun 25 16:24:07.130013 kernel: audit: type=1334 audit(1719332647.126:494): prog-id=132 op=LOAD Jun 25 16:24:07.130117 kernel: audit: type=1300 audit(1719332647.126:494): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001ab988 a2=78 a3=0 items=0 ppid=3637 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:07.126000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001ab988 a2=78 a3=0 items=0 ppid=3637 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:07.134060 kernel: audit: type=1327 audit(1719332647.126:494): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623764383337643639373139616334663033343033356331633631 Jun 25 16:24:07.134181 kernel: audit: type=1334 audit(1719332647.126:495): prog-id=133 op=LOAD Jun 25 16:24:07.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623764383337643639373139616334663033343033356331633631 Jun 25 16:24:07.126000 audit: BPF prog-id=133 op=LOAD Jun 25 16:24:07.138818 kernel: audit: type=1300 audit(1719332647.126:495): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001ab720 a2=78 a3=0 items=0 ppid=3637 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:07.126000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=18 a0=5 a1=c0001ab720 a2=78 a3=0 items=0 ppid=3637 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:07.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623764383337643639373139616334663033343033356331633631 Jun 25 16:24:07.144339 kernel: audit: type=1327 audit(1719332647.126:495): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623764383337643639373139616334663033343033356331633631 Jun 25 16:24:07.144493 kernel: audit: type=1334 audit(1719332647.126:496): prog-id=133 op=UNLOAD Jun 25 16:24:07.126000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:24:07.145537 kernel: audit: type=1334 audit(1719332647.126:497): prog-id=132 op=UNLOAD Jun 25 16:24:07.126000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:24:07.146421 kernel: audit: type=1334 audit(1719332647.126:498): prog-id=134 op=LOAD Jun 25 16:24:07.126000 audit: BPF prog-id=134 op=LOAD Jun 25 16:24:07.126000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001abbe0 a2=78 a3=0 items=0 ppid=3637 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:07.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623764383337643639373139616334663033343033356331633631 Jun 25 16:24:07.218681 containerd[1794]: time="2024-06-25T16:24:07.218630636Z" level=info msg="StartContainer for \"e5b7d837d69719ac4f034035c1c61dfd8f569bcca68ce0288422791527e5221f\" returns successfully" Jun 25 16:24:08.782919 kubelet[2901]: E0625 16:24:08.781324 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:09.030467 kubelet[2901]: I0625 16:24:09.030428 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:24:10.781252 kubelet[2901]: E0625 16:24:10.781123 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:11.965480 containerd[1794]: time="2024-06-25T16:24:11.965434926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:11.967601 containerd[1794]: time="2024-06-25T16:24:11.967545605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:24:11.969614 containerd[1794]: 
time="2024-06-25T16:24:11.969554271Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:11.972812 containerd[1794]: time="2024-06-25T16:24:11.972777757Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:11.975373 containerd[1794]: time="2024-06-25T16:24:11.975332323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:11.977026 containerd[1794]: time="2024-06-25T16:24:11.976419953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.0172283s" Jun 25 16:24:11.977264 containerd[1794]: time="2024-06-25T16:24:11.977036286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:24:11.981749 containerd[1794]: time="2024-06-25T16:24:11.981697927Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:24:12.015351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970024402.mount: Deactivated successfully. Jun 25 16:24:12.031046 containerd[1794]: time="2024-06-25T16:24:12.030994932Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f\"" Jun 25 16:24:12.033519 containerd[1794]: time="2024-06-25T16:24:12.031912399Z" level=info msg="StartContainer for \"db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f\"" Jun 25 16:24:12.149636 systemd[1]: run-containerd-runc-k8s.io-db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f-runc.CvICVH.mount: Deactivated successfully. Jun 25 16:24:12.156064 systemd[1]: Started cri-containerd-db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f.scope - libcontainer container db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f. 
Jun 25 16:24:12.187000 audit: BPF prog-id=135 op=LOAD Jun 25 16:24:12.189501 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:24:12.189611 kernel: audit: type=1334 audit(1719332652.187:499): prog-id=135 op=LOAD Jun 25 16:24:12.189649 kernel: audit: type=1300 audit(1719332652.187:499): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.187000 audit[3802]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353837623039343061393364396139373363323666326139653666 Jun 25 16:24:12.196407 kernel: audit: type=1327 audit(1719332652.187:499): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353837623039343061393364396139373363323666326139653666 Jun 25 16:24:12.196727 kernel: audit: type=1334 audit(1719332652.187:500): prog-id=136 op=LOAD Jun 25 16:24:12.196778 kernel: audit: type=1300 audit(1719332652.187:500): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.187000 audit: BPF prog-id=136 op=LOAD Jun 25 16:24:12.187000 audit[3802]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353837623039343061393364396139373363323666326139653666 Jun 25 16:24:12.187000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:24:12.211347 kernel: audit: type=1327 audit(1719332652.187:500): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353837623039343061393364396139373363323666326139653666 Jun 25 16:24:12.211407 kernel: audit: type=1334 audit(1719332652.187:501): prog-id=136 op=UNLOAD Jun 25 16:24:12.211439 kernel: audit: type=1334 audit(1719332652.187:502): prog-id=135 op=UNLOAD Jun 25 16:24:12.187000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:24:12.187000 audit: BPF prog-id=137 op=LOAD Jun 25 16:24:12.187000 audit[3802]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.219418 kernel: audit: type=1334 audit(1719332652.187:503): prog-id=137 op=LOAD Jun 25 16:24:12.219777 kernel: audit: type=1300 audit(1719332652.187:503): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3597 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353837623039343061393364396139373363323666326139653666 Jun 25 16:24:12.289343 containerd[1794]: time="2024-06-25T16:24:12.289289344Z" level=info msg="StartContainer for \"db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f\" returns successfully" Jun 25 16:24:12.781746 kubelet[2901]: E0625 16:24:12.781631 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:13.094005 kubelet[2901]: I0625 16:24:13.093890 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-67f58ff66c-jqgm9" podStartSLOduration=9.056632677 podCreationTimestamp="2024-06-25 16:24:00 +0000 UTC" firstStartedPulling="2024-06-25 16:24:02.917857278 +0000 UTC m=+23.448893608" lastFinishedPulling="2024-06-25 16:24:06.955037093 +0000 UTC m=+27.486073426" observedRunningTime="2024-06-25 16:24:08.04098617 +0000 UTC m=+28.572022512" watchObservedRunningTime="2024-06-25 16:24:13.093812495 +0000 UTC m=+33.624848841" Jun 25 16:24:13.408508 systemd[1]: cri-containerd-db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f.scope: Deactivated successfully. Jun 25 16:24:13.411000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:24:13.469451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f-rootfs.mount: Deactivated successfully. 
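
Note: the install-cni container whose scope just deactivated above is what normally clears the kubelet's recurring "cni plugin not initialized" errors; once a CNI network config and the Calico plugin binary are present on the host, the runtime reports NetworkReady and the node flips to Ready, which is what the next entries show. Below is a minimal, hypothetical operator-side check for that state; the /etc/cni/net.d and /opt/cni/bin paths are conventional defaults and are not taken from this log.

# Hypothetical check for the condition behind the kubelet's
# "cni plugin not initialized" messages. Paths are conventional
# defaults, not values read from this log.
import os

def cni_installed(conf_dir: str = "/etc/cni/net.d", bin_dir: str = "/opt/cni/bin") -> bool:
    """Return True once a CNI network config and the calico plugin binary are on disk."""
    confs = os.listdir(conf_dir) if os.path.isdir(conf_dir) else []
    bins = os.listdir(bin_dir) if os.path.isdir(bin_dir) else []
    return any(f.endswith((".conf", ".conflist")) for f in confs) and "calico" in bins

if __name__ == "__main__":
    print("CNI config and calico plugin present:", cni_installed())
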
Jun 25 16:24:13.474249 kubelet[2901]: I0625 16:24:13.473614 2901 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:24:13.491585 containerd[1794]: time="2024-06-25T16:24:13.491492420Z" level=info msg="shim disconnected" id=db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f namespace=k8s.io Jun 25 16:24:13.492531 containerd[1794]: time="2024-06-25T16:24:13.492484113Z" level=warning msg="cleaning up after shim disconnected" id=db587b0940a93d9a973c26f2a9e6f2c671eef7f85967f045ef9bc16967cea76f namespace=k8s.io Jun 25 16:24:13.492531 containerd[1794]: time="2024-06-25T16:24:13.492510918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:24:13.533051 kubelet[2901]: I0625 16:24:13.531966 2901 topology_manager.go:215] "Topology Admit Handler" podUID="33705f7f-8163-4c78-a2e8-26b7380a9eca" podNamespace="calico-system" podName="calico-kube-controllers-777566c45b-m5859" Jun 25 16:24:13.533051 kubelet[2901]: I0625 16:24:13.532927 2901 topology_manager.go:215] "Topology Admit Handler" podUID="f5a51c64-0cb9-42e6-90f0-efef0dbd993c" podNamespace="kube-system" podName="coredns-5dd5756b68-8sldg" Jun 25 16:24:13.533722 kubelet[2901]: I0625 16:24:13.533106 2901 topology_manager.go:215] "Topology Admit Handler" podUID="f52a3793-af6f-4a8c-9790-d32a4489299c" podNamespace="kube-system" podName="coredns-5dd5756b68-qd8fw" Jun 25 16:24:13.544922 systemd[1]: Created slice kubepods-besteffort-pod33705f7f_8163_4c78_a2e8_26b7380a9eca.slice - libcontainer container kubepods-besteffort-pod33705f7f_8163_4c78_a2e8_26b7380a9eca.slice. Jun 25 16:24:13.565339 systemd[1]: Created slice kubepods-burstable-podf5a51c64_0cb9_42e6_90f0_efef0dbd993c.slice - libcontainer container kubepods-burstable-podf5a51c64_0cb9_42e6_90f0_efef0dbd993c.slice. 
Jun 25 16:24:13.567350 kubelet[2901]: I0625 16:24:13.567304 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5a51c64-0cb9-42e6-90f0-efef0dbd993c-config-volume\") pod \"coredns-5dd5756b68-8sldg\" (UID: \"f5a51c64-0cb9-42e6-90f0-efef0dbd993c\") " pod="kube-system/coredns-5dd5756b68-8sldg" Jun 25 16:24:13.569778 kubelet[2901]: I0625 16:24:13.569243 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33705f7f-8163-4c78-a2e8-26b7380a9eca-tigera-ca-bundle\") pod \"calico-kube-controllers-777566c45b-m5859\" (UID: \"33705f7f-8163-4c78-a2e8-26b7380a9eca\") " pod="calico-system/calico-kube-controllers-777566c45b-m5859" Jun 25 16:24:13.569778 kubelet[2901]: I0625 16:24:13.569459 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f52a3793-af6f-4a8c-9790-d32a4489299c-config-volume\") pod \"coredns-5dd5756b68-qd8fw\" (UID: \"f52a3793-af6f-4a8c-9790-d32a4489299c\") " pod="kube-system/coredns-5dd5756b68-qd8fw" Jun 25 16:24:13.569778 kubelet[2901]: I0625 16:24:13.569557 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmts\" (UniqueName: \"kubernetes.io/projected/f52a3793-af6f-4a8c-9790-d32a4489299c-kube-api-access-hkmts\") pod \"coredns-5dd5756b68-qd8fw\" (UID: \"f52a3793-af6f-4a8c-9790-d32a4489299c\") " pod="kube-system/coredns-5dd5756b68-qd8fw" Jun 25 16:24:13.569778 kubelet[2901]: I0625 16:24:13.569632 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t75w\" (UniqueName: \"kubernetes.io/projected/f5a51c64-0cb9-42e6-90f0-efef0dbd993c-kube-api-access-9t75w\") pod \"coredns-5dd5756b68-8sldg\" (UID: \"f5a51c64-0cb9-42e6-90f0-efef0dbd993c\") " pod="kube-system/coredns-5dd5756b68-8sldg" Jun 25 16:24:13.571537 kubelet[2901]: I0625 16:24:13.569709 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj96v\" (UniqueName: \"kubernetes.io/projected/33705f7f-8163-4c78-a2e8-26b7380a9eca-kube-api-access-dj96v\") pod \"calico-kube-controllers-777566c45b-m5859\" (UID: \"33705f7f-8163-4c78-a2e8-26b7380a9eca\") " pod="calico-system/calico-kube-controllers-777566c45b-m5859" Jun 25 16:24:13.580721 systemd[1]: Created slice kubepods-burstable-podf52a3793_af6f_4a8c_9790_d32a4489299c.slice - libcontainer container kubepods-burstable-podf52a3793_af6f_4a8c_9790_d32a4489299c.slice. 
Jun 25 16:24:13.850174 containerd[1794]: time="2024-06-25T16:24:13.850111363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777566c45b-m5859,Uid:33705f7f-8163-4c78-a2e8-26b7380a9eca,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:13.879597 containerd[1794]: time="2024-06-25T16:24:13.879553411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8sldg,Uid:f5a51c64-0cb9-42e6-90f0-efef0dbd993c,Namespace:kube-system,Attempt:0,}" Jun 25 16:24:13.893975 containerd[1794]: time="2024-06-25T16:24:13.893926777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qd8fw,Uid:f52a3793-af6f-4a8c-9790-d32a4489299c,Namespace:kube-system,Attempt:0,}" Jun 25 16:24:14.075733 containerd[1794]: time="2024-06-25T16:24:14.074016329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:24:14.142180 containerd[1794]: time="2024-06-25T16:24:14.142001047Z" level=error msg="Failed to destroy network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.142973 containerd[1794]: time="2024-06-25T16:24:14.142921820Z" level=error msg="encountered an error cleaning up failed sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.143164 containerd[1794]: time="2024-06-25T16:24:14.143130227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777566c45b-m5859,Uid:33705f7f-8163-4c78-a2e8-26b7380a9eca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.146386 kubelet[2901]: E0625 16:24:14.145048 2901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.146386 kubelet[2901]: E0625 16:24:14.145119 2901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777566c45b-m5859" Jun 25 16:24:14.146386 kubelet[2901]: E0625 16:24:14.145153 2901 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777566c45b-m5859" Jun 25 16:24:14.146810 kubelet[2901]: E0625 16:24:14.145217 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-777566c45b-m5859_calico-system(33705f7f-8163-4c78-a2e8-26b7380a9eca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-777566c45b-m5859_calico-system(33705f7f-8163-4c78-a2e8-26b7380a9eca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-777566c45b-m5859" podUID="33705f7f-8163-4c78-a2e8-26b7380a9eca" Jun 25 16:24:14.193142 containerd[1794]: time="2024-06-25T16:24:14.193077396Z" level=error msg="Failed to destroy network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.193779 containerd[1794]: time="2024-06-25T16:24:14.193728461Z" level=error msg="encountered an error cleaning up failed sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.193993 containerd[1794]: time="2024-06-25T16:24:14.193956013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8sldg,Uid:f5a51c64-0cb9-42e6-90f0-efef0dbd993c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.202511 kubelet[2901]: E0625 16:24:14.202482 2901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.203081 kubelet[2901]: E0625 16:24:14.203064 2901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8sldg" Jun 25 16:24:14.203236 kubelet[2901]: E0625 16:24:14.203226 2901 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8sldg" Jun 25 16:24:14.203393 kubelet[2901]: E0625 16:24:14.203382 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-8sldg_kube-system(f5a51c64-0cb9-42e6-90f0-efef0dbd993c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-8sldg_kube-system(f5a51c64-0cb9-42e6-90f0-efef0dbd993c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8sldg" podUID="f5a51c64-0cb9-42e6-90f0-efef0dbd993c" Jun 25 16:24:14.213539 containerd[1794]: time="2024-06-25T16:24:14.213476023Z" level=error msg="Failed to destroy network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.214108 containerd[1794]: time="2024-06-25T16:24:14.214061350Z" level=error msg="encountered an error cleaning up failed sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.214225 containerd[1794]: time="2024-06-25T16:24:14.214134681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qd8fw,Uid:f52a3793-af6f-4a8c-9790-d32a4489299c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.214482 kubelet[2901]: E0625 16:24:14.214450 2901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.214572 kubelet[2901]: E0625 16:24:14.214509 2901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qd8fw" Jun 25 16:24:14.214572 kubelet[2901]: E0625 16:24:14.214537 2901 kuberuntime_manager.go:1171] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qd8fw" Jun 25 16:24:14.214668 kubelet[2901]: E0625 16:24:14.214605 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-qd8fw_kube-system(f52a3793-af6f-4a8c-9790-d32a4489299c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-qd8fw_kube-system(f52a3793-af6f-4a8c-9790-d32a4489299c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qd8fw" podUID="f52a3793-af6f-4a8c-9790-d32a4489299c" Jun 25 16:24:14.473028 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa-shm.mount: Deactivated successfully. Jun 25 16:24:14.789576 systemd[1]: Created slice kubepods-besteffort-pod2bece7e7_c85d_4cea_8dc0_bcb503dd2a60.slice - libcontainer container kubepods-besteffort-pod2bece7e7_c85d_4cea_8dc0_bcb503dd2a60.slice. Jun 25 16:24:14.795305 containerd[1794]: time="2024-06-25T16:24:14.795222223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcwhx,Uid:2bece7e7-c85d-4cea-8dc0-bcb503dd2a60,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:14.967039 containerd[1794]: time="2024-06-25T16:24:14.966975192Z" level=error msg="Failed to destroy network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.968019 containerd[1794]: time="2024-06-25T16:24:14.967414769Z" level=error msg="encountered an error cleaning up failed sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.968019 containerd[1794]: time="2024-06-25T16:24:14.967489330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcwhx,Uid:2bece7e7-c85d-4cea-8dc0-bcb503dd2a60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.973738 kubelet[2901]: E0625 16:24:14.970197 2901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:14.973738 kubelet[2901]: E0625 16:24:14.970266 2901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:14.973738 kubelet[2901]: E0625 16:24:14.970298 2901 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bcwhx" Jun 25 16:24:14.972757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326-shm.mount: Deactivated successfully. Jun 25 16:24:14.974127 kubelet[2901]: E0625 16:24:14.970373 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bcwhx_calico-system(2bece7e7-c85d-4cea-8dc0-bcb503dd2a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bcwhx_calico-system(2bece7e7-c85d-4cea-8dc0-bcb503dd2a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:15.076199 kubelet[2901]: I0625 16:24:15.075624 2901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:15.088327 kubelet[2901]: I0625 16:24:15.079795 2901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:15.088502 containerd[1794]: time="2024-06-25T16:24:15.088210447Z" level=info msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" Jun 25 16:24:15.105700 kubelet[2901]: I0625 16:24:15.105011 2901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:15.108612 containerd[1794]: time="2024-06-25T16:24:15.107633138Z" level=info msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" Jun 25 16:24:15.116472 containerd[1794]: time="2024-06-25T16:24:15.115325598Z" level=info msg="Ensure that sandbox 1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa in task-service has been cleanup successfully" Jun 25 16:24:15.116713 containerd[1794]: time="2024-06-25T16:24:15.116657632Z" level=info msg="Ensure that sandbox 5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726 in task-service has been cleanup successfully" Jun 25 16:24:15.117725 containerd[1794]: 
time="2024-06-25T16:24:15.117676502Z" level=info msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" Jun 25 16:24:15.118011 containerd[1794]: time="2024-06-25T16:24:15.117969199Z" level=info msg="Ensure that sandbox e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01 in task-service has been cleanup successfully" Jun 25 16:24:15.119459 kubelet[2901]: I0625 16:24:15.119434 2901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:15.122182 containerd[1794]: time="2024-06-25T16:24:15.121521505Z" level=info msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" Jun 25 16:24:15.122182 containerd[1794]: time="2024-06-25T16:24:15.121769489Z" level=info msg="Ensure that sandbox 3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326 in task-service has been cleanup successfully" Jun 25 16:24:15.268324 containerd[1794]: time="2024-06-25T16:24:15.268254026Z" level=error msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" failed" error="failed to destroy network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:15.268690 kubelet[2901]: E0625 16:24:15.268664 2901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:15.270115 kubelet[2901]: E0625 16:24:15.269748 2901 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326"} Jun 25 16:24:15.270115 kubelet[2901]: E0625 16:24:15.269828 2901 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:24:15.270115 kubelet[2901]: E0625 16:24:15.269931 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bcwhx" podUID="2bece7e7-c85d-4cea-8dc0-bcb503dd2a60" Jun 25 16:24:15.274051 containerd[1794]: time="2024-06-25T16:24:15.273980222Z" level=error 
msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" failed" error="failed to destroy network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:15.274221 containerd[1794]: time="2024-06-25T16:24:15.274166857Z" level=error msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" failed" error="failed to destroy network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:15.275013 kubelet[2901]: E0625 16:24:15.274493 2901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:15.275013 kubelet[2901]: E0625 16:24:15.274562 2901 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa"} Jun 25 16:24:15.275013 kubelet[2901]: E0625 16:24:15.274664 2901 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33705f7f-8163-4c78-a2e8-26b7380a9eca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:24:15.275013 kubelet[2901]: E0625 16:24:15.274725 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33705f7f-8163-4c78-a2e8-26b7380a9eca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-777566c45b-m5859" podUID="33705f7f-8163-4c78-a2e8-26b7380a9eca" Jun 25 16:24:15.275549 kubelet[2901]: E0625 16:24:15.274795 2901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:15.275549 kubelet[2901]: E0625 16:24:15.274819 2901 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726"} Jun 25 16:24:15.275549 kubelet[2901]: E0625 16:24:15.274915 2901 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5a51c64-0cb9-42e6-90f0-efef0dbd993c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:24:15.275549 kubelet[2901]: E0625 16:24:15.274951 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5a51c64-0cb9-42e6-90f0-efef0dbd993c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8sldg" podUID="f5a51c64-0cb9-42e6-90f0-efef0dbd993c" Jun 25 16:24:15.280916 containerd[1794]: time="2024-06-25T16:24:15.280836991Z" level=error msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" failed" error="failed to destroy network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:24:15.281836 kubelet[2901]: E0625 16:24:15.281360 2901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:15.281836 kubelet[2901]: E0625 16:24:15.281409 2901 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01"} Jun 25 16:24:15.281836 kubelet[2901]: E0625 16:24:15.281469 2901 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f52a3793-af6f-4a8c-9790-d32a4489299c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:24:15.281836 kubelet[2901]: E0625 16:24:15.281693 2901 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f52a3793-af6f-4a8c-9790-d32a4489299c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qd8fw" podUID="f52a3793-af6f-4a8c-9790-d32a4489299c" Jun 25 16:24:20.541978 kubelet[2901]: I0625 16:24:20.541793 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:24:20.745312 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:24:20.745461 kernel: audit: type=1325 audit(1719332660.742:505): table=filter:95 family=2 entries=15 op=nft_register_rule pid=4070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:20.742000 audit[4070]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=4070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:20.742000 audit[4070]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff109fa8d0 a2=0 a3=7fff109fa8bc items=0 ppid=3254 pid=4070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:20.764916 kernel: audit: type=1300 audit(1719332660.742:505): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff109fa8d0 a2=0 a3=7fff109fa8bc items=0 ppid=3254 pid=4070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:20.765061 kernel: audit: type=1327 audit(1719332660.742:505): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:20.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:20.774590 kernel: audit: type=1325 audit(1719332660.766:506): table=nat:96 family=2 entries=19 op=nft_register_chain pid=4070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:20.774799 kernel: audit: type=1300 audit(1719332660.766:506): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff109fa8d0 a2=0 a3=7fff109fa8bc items=0 ppid=3254 pid=4070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:20.766000 audit[4070]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=4070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:20.766000 audit[4070]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff109fa8d0 a2=0 a3=7fff109fa8bc items=0 ppid=3254 pid=4070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:20.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:20.777917 kernel: audit: type=1327 audit(1719332660.766:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:21.464462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919022973.mount: Deactivated successfully. 
Jun 25 16:24:21.557009 containerd[1794]: time="2024-06-25T16:24:21.556954088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:21.564964 containerd[1794]: time="2024-06-25T16:24:21.564895008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:24:21.567924 containerd[1794]: time="2024-06-25T16:24:21.567851677Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:21.570547 containerd[1794]: time="2024-06-25T16:24:21.570508739Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:21.572770 containerd[1794]: time="2024-06-25T16:24:21.572733911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:21.573878 containerd[1794]: time="2024-06-25T16:24:21.573825015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.499751517s" Jun 25 16:24:21.574041 containerd[1794]: time="2024-06-25T16:24:21.574013612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:24:21.646115 containerd[1794]: time="2024-06-25T16:24:21.646064933Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:24:21.729313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733098744.mount: Deactivated successfully. Jun 25 16:24:21.739368 containerd[1794]: time="2024-06-25T16:24:21.739309542Z" level=info msg="CreateContainer within sandbox \"ab0a9ecd7e6f9aba4fa4b35bd0ba65bec83e6b76613c3b6a5c25b0506bc2f004\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615\"" Jun 25 16:24:21.752246 containerd[1794]: time="2024-06-25T16:24:21.750078389Z" level=info msg="StartContainer for \"fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615\"" Jun 25 16:24:21.871860 systemd[1]: Started cri-containerd-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615.scope - libcontainer container fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615. 
Jun 25 16:24:21.914912 kernel: audit: type=1334 audit(1719332661.909:507): prog-id=138 op=LOAD Jun 25 16:24:21.915038 kernel: audit: type=1300 audit(1719332661.909:507): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3597 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:21.915075 kernel: audit: type=1327 audit(1719332661.909:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663865333863346162663830633662656332353933323637313034 Jun 25 16:24:21.909000 audit: BPF prog-id=138 op=LOAD Jun 25 16:24:21.909000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3597 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:21.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663865333863346162663830633662656332353933323637313034 Jun 25 16:24:21.909000 audit: BPF prog-id=139 op=LOAD Jun 25 16:24:21.920880 kernel: audit: type=1334 audit(1719332661.909:508): prog-id=139 op=LOAD Jun 25 16:24:21.909000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3597 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:21.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663865333863346162663830633662656332353933323637313034 Jun 25 16:24:21.909000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:24:21.909000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:24:21.909000 audit: BPF prog-id=140 op=LOAD Jun 25 16:24:21.909000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3597 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:21.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663865333863346162663830633662656332353933323637313034 Jun 25 16:24:21.955243 containerd[1794]: time="2024-06-25T16:24:21.955112358Z" level=info msg="StartContainer for \"fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615\" returns successfully" Jun 25 16:24:22.182786 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:24:22.183382 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 16:24:23.168113 kubelet[2901]: I0625 16:24:23.168080 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:24:23.751000 audit[4171]: AVC avc: denied { write } for pid=4171 comm="tee" name="fd" dev="proc" ino=26143 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.751000 audit[4171]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdd674a1a a2=241 a3=1b6 items=1 ppid=4144 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.751000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:24:23.751000 audit: PATH item=0 name="/dev/fd/63" inode=26868 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.860000 audit[4187]: AVC avc: denied { write } for pid=4187 comm="tee" name="fd" dev="proc" ino=26897 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.860000 audit[4187]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc913a8a0b a2=241 a3=1b6 items=1 ppid=4146 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.860000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:24:23.860000 audit: PATH item=0 name="/dev/fd/63" inode=26884 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.904000 audit[4195]: AVC avc: denied { write } for pid=4195 comm="tee" name="fd" dev="proc" ino=26185 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.905000 audit[4199]: AVC avc: denied { write } for pid=4199 comm="tee" name="fd" dev="proc" ino=26909 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.905000 audit[4199]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc515d7a0a a2=241 a3=1b6 items=1 ppid=4160 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.905000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:24:23.905000 audit: PATH item=0 name="/dev/fd/63" inode=26903 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.904000 audit[4195]: 
SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff8a75ea1b a2=241 a3=1b6 items=1 ppid=4148 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.904000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:24:23.904000 audit: PATH item=0 name="/dev/fd/63" inode=26171 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.904000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.915000 audit[4197]: AVC avc: denied { write } for pid=4197 comm="tee" name="fd" dev="proc" ino=26193 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.915000 audit[4197]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe60baea1a a2=241 a3=1b6 items=1 ppid=4150 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.915000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:24:23.915000 audit: PATH item=0 name="/dev/fd/63" inode=26172 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.941498 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.p6BVBd.mount: Deactivated successfully. 
Jun 25 16:24:23.947000 audit[4214]: AVC avc: denied { write } for pid=4214 comm="tee" name="fd" dev="proc" ino=26198 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.954000 audit[4216]: AVC avc: denied { write } for pid=4216 comm="tee" name="fd" dev="proc" ino=26201 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:24:23.947000 audit[4214]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea93f0a1c a2=241 a3=1b6 items=1 ppid=4157 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.947000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:24:23.947000 audit: PATH item=0 name="/dev/fd/63" inode=26179 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:23.954000 audit[4216]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2ac55a1a a2=241 a3=1b6 items=1 ppid=4152 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.954000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:24:23.954000 audit: PATH item=0 name="/dev/fd/63" inode=26182 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:24:23.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:24:24.174330 kubelet[2901]: I0625 16:24:24.170965 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:24:24.529097 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.mADc57.mount: Deactivated successfully. Jun 25 16:24:24.787051 (udev-worker)[4112]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 16:24:24.790387 systemd-networkd[1527]: vxlan.calico: Link UP Jun 25 16:24:24.790397 systemd-networkd[1527]: vxlan.calico: Gained carrier Jun 25 16:24:24.881000 audit: BPF prog-id=141 op=LOAD Jun 25 16:24:24.881000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdcbe553e0 a2=70 a3=7fe6f38e9000 items=0 ppid=4145 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.881000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:24:24.881000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:24:24.881000 audit: BPF prog-id=142 op=LOAD Jun 25 16:24:24.881000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdcbe553e0 a2=70 a3=6f items=0 ppid=4145 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.881000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:24:24.881000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:24:24.881000 audit: BPF prog-id=143 op=LOAD Jun 25 16:24:24.881000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdcbe55370 a2=70 a3=7ffdcbe553e0 items=0 ppid=4145 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.881000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:24:24.881000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:24:24.882000 audit: BPF prog-id=144 op=LOAD Jun 25 16:24:24.882000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdcbe553a0 a2=70 a3=0 items=0 ppid=4145 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.882000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:24:24.883344 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 16:24:24.926000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:24:25.023000 audit[4359]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:25.023000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffde2b79fe0 a2=0 a3=7ffde2b79fcc items=0 ppid=4145 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.023000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:25.031000 audit[4357]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:25.031000 audit[4357]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcf4b17a40 a2=0 a3=7ffcf4b17a2c items=0 ppid=4145 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.031000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:25.034000 audit[4360]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:25.034000 audit[4360]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fff49373510 a2=0 a3=7fff493734fc items=0 ppid=4145 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.034000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:25.043000 audit[4358]: NETFILTER_CFG table=raw:100 family=2 entries=19 op=nft_register_chain pid=4358 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:25.043000 audit[4358]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff1490fd20 a2=0 a3=7fff1490fd0c items=0 ppid=4145 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.043000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:26.030013 systemd-networkd[1527]: vxlan.calico: Gained IPv6LL Jun 25 16:24:27.626729 systemd[1]: Started sshd@7-172.31.29.32:22-139.178.89.65:53686.service - OpenSSH per-connection server daemon (139.178.89.65:53686). Jun 25 16:24:27.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.32:22-139.178.89.65:53686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:27.629363 kernel: kauditd_printk_skb: 70 callbacks suppressed Jun 25 16:24:27.629434 kernel: audit: type=1130 audit(1719332667.627:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.32:22-139.178.89.65:53686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.784539 containerd[1794]: time="2024-06-25T16:24:27.783277771Z" level=info msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" Jun 25 16:24:27.784539 containerd[1794]: time="2024-06-25T16:24:27.784231858Z" level=info msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" Jun 25 16:24:27.873000 audit[4373]: USER_ACCT pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.886831 kernel: audit: type=1101 audit(1719332667.873:532): pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.888762 sshd[4373]: Accepted publickey for core from 139.178.89.65 port 53686 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:27.921895 kernel: audit: type=1103 audit(1719332667.903:533): pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.922030 kernel: audit: type=1006 audit(1719332667.903:534): pid=4373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 16:24:27.923194 kernel: audit: type=1300 audit(1719332667.903:534): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc35ac69d0 a2=3 a3=7fd067ea5480 items=0 ppid=1 pid=4373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:27.923360 kernel: audit: type=1327 audit(1719332667.903:534): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:27.903000 audit[4373]: CRED_ACQ pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.903000 audit[4373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc35ac69d0 a2=3 a3=7fd067ea5480 items=0 ppid=1 pid=4373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:27.903000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:27.911924 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:27.952024 systemd-logind[1784]: New session 8 of user core. Jun 25 16:24:27.953134 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:24:27.980175 kernel: audit: type=1105 audit(1719332667.971:535): pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.980446 kernel: audit: type=1103 audit(1719332667.971:536): pid=4413 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.971000 audit[4373]: USER_START pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.971000 audit[4413]: CRED_ACQ pid=4413 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:28.156592 kubelet[2901]: I0625 16:24:28.156465 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-5m4lw" podStartSLOduration=9.156983256 podCreationTimestamp="2024-06-25 16:24:00 +0000 UTC" firstStartedPulling="2024-06-25 16:24:02.648947824 +0000 UTC m=+23.179984149" lastFinishedPulling="2024-06-25 16:24:21.574364973 +0000 UTC m=+42.105401295" observedRunningTime="2024-06-25 16:24:22.195588214 +0000 UTC m=+42.726624557" watchObservedRunningTime="2024-06-25 16:24:28.082400402 +0000 UTC m=+48.613436741" Jun 25 16:24:28.300107 sshd[4373]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:28.308263 kernel: audit: type=1106 audit(1719332668.301:537): pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:28.308390 kernel: audit: type=1104 audit(1719332668.302:538): pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:28.301000 audit[4373]: USER_END pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:28.302000 audit[4373]: CRED_DISP pid=4373 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:28.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.32:22-139.178.89.65:53686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:28.307144 systemd[1]: sshd@7-172.31.29.32:22-139.178.89.65:53686.service: Deactivated successfully. Jun 25 16:24:28.308278 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:24:28.309987 systemd-logind[1784]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:24:28.310983 systemd-logind[1784]: Removed session 8. Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.133 [INFO][4403] k8s.go 608: Cleaning up netns ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.133 [INFO][4403] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" iface="eth0" netns="/var/run/netns/cni-b1332ed8-c747-fd48-2761-28b277dc5e20" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.136 [INFO][4403] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" iface="eth0" netns="/var/run/netns/cni-b1332ed8-c747-fd48-2761-28b277dc5e20" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.136 [INFO][4403] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" iface="eth0" netns="/var/run/netns/cni-b1332ed8-c747-fd48-2761-28b277dc5e20" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.136 [INFO][4403] k8s.go 615: Releasing IP address(es) ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.137 [INFO][4403] utils.go 188: Calico CNI releasing IP address ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.520 [INFO][4426] ipam_plugin.go 411: Releasing address using handleID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.523 [INFO][4426] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.526 [INFO][4426] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.548 [WARNING][4426] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.548 [INFO][4426] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.550 [INFO][4426] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:28.565481 containerd[1794]: 2024-06-25 16:24:28.559 [INFO][4403] k8s.go 621: Teardown processing complete. 
ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:28.573131 systemd[1]: run-netns-cni\x2db1332ed8\x2dc747\x2dfd48\x2d2761\x2d28b277dc5e20.mount: Deactivated successfully. Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.095 [INFO][4405] k8s.go 608: Cleaning up netns ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.102 [INFO][4405] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" iface="eth0" netns="/var/run/netns/cni-ada7e1dd-cf88-6f53-eb54-8c56db697f30" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.102 [INFO][4405] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" iface="eth0" netns="/var/run/netns/cni-ada7e1dd-cf88-6f53-eb54-8c56db697f30" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.106 [INFO][4405] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" iface="eth0" netns="/var/run/netns/cni-ada7e1dd-cf88-6f53-eb54-8c56db697f30" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.106 [INFO][4405] k8s.go 615: Releasing IP address(es) ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.106 [INFO][4405] utils.go 188: Calico CNI releasing IP address ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.519 [INFO][4421] ipam_plugin.go 411: Releasing address using handleID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.524 [INFO][4421] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.551 [INFO][4421] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.566 [WARNING][4421] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.566 [INFO][4421] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.573 [INFO][4421] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:28.593699 containerd[1794]: 2024-06-25 16:24:28.589 [INFO][4405] k8s.go 621: Teardown processing complete. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:28.601358 systemd[1]: run-netns-cni\x2dada7e1dd\x2dcf88\x2d6f53\x2deb54\x2d8c56db697f30.mount: Deactivated successfully. 
Jun 25 16:24:28.605757 containerd[1794]: time="2024-06-25T16:24:28.605707773Z" level=info msg="TearDown network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" successfully" Jun 25 16:24:28.606147 containerd[1794]: time="2024-06-25T16:24:28.606061299Z" level=info msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" returns successfully" Jun 25 16:24:28.607291 containerd[1794]: time="2024-06-25T16:24:28.607230360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qd8fw,Uid:f52a3793-af6f-4a8c-9790-d32a4489299c,Namespace:kube-system,Attempt:1,}" Jun 25 16:24:28.630426 containerd[1794]: time="2024-06-25T16:24:28.576020219Z" level=info msg="TearDown network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" successfully" Jun 25 16:24:28.630709 containerd[1794]: time="2024-06-25T16:24:28.630675515Z" level=info msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" returns successfully" Jun 25 16:24:28.631772 containerd[1794]: time="2024-06-25T16:24:28.631733789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcwhx,Uid:2bece7e7-c85d-4cea-8dc0-bcb503dd2a60,Namespace:calico-system,Attempt:1,}" Jun 25 16:24:28.972377 (udev-worker)[4479]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:24:28.978977 systemd-networkd[1527]: cali8c406fc5b8e: Link UP Jun 25 16:24:28.983682 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:28.983762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8c406fc5b8e: link becomes ready Jun 25 16:24:28.984113 systemd-networkd[1527]: cali8c406fc5b8e: Gained carrier Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.809 [INFO][4440] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0 csi-node-driver- calico-system 2bece7e7-c85d-4cea-8dc0-bcb503dd2a60 752 0 2024-06-25 16:24:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-29-32 csi-node-driver-bcwhx eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8c406fc5b8e [] []}} ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.809 [INFO][4440] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.885 [INFO][4466] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" HandleID="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.906 [INFO][4466] ipam_plugin.go 264: Auto assigning IP 
ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" HandleID="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318360), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-32", "pod":"csi-node-driver-bcwhx", "timestamp":"2024-06-25 16:24:28.885419056 +0000 UTC"}, Hostname:"ip-172-31-29-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.906 [INFO][4466] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.907 [INFO][4466] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.907 [INFO][4466] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-32' Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.910 [INFO][4466] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.930 [INFO][4466] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.936 [INFO][4466] ipam.go 489: Trying affinity for 192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.939 [INFO][4466] ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.943 [INFO][4466] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.943 [INFO][4466] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.946 [INFO][4466] ipam.go 1685: Creating new handle: k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0 Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.950 [INFO][4466] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4466] ipam.go 1216: Successfully claimed IPs: [192.168.74.193/26] block=192.168.74.192/26 handle="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4466] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.193/26] handle="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" host="ip-172-31-29-32" Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4466] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:24:29.018217 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4466] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.74.193/26] IPv6=[] ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" HandleID="k8s-pod-network.0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:28.963 [INFO][4440] k8s.go 386: Populated endpoint ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"", Pod:"csi-node-driver-bcwhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8c406fc5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:28.963 [INFO][4440] k8s.go 387: Calico CNI using IPs: [192.168.74.193/32] ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:28.963 [INFO][4440] dataplane_linux.go 68: Setting the host side veth name to cali8c406fc5b8e ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:28.985 [INFO][4440] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:28.986 [INFO][4440] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0", Pod:"csi-node-driver-bcwhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8c406fc5b8e", MAC:"46:87:f6:44:3d:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:29.020050 containerd[1794]: 2024-06-25 16:24:29.012 [INFO][4440] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0" Namespace="calico-system" Pod="csi-node-driver-bcwhx" WorkloadEndpoint="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:29.075000 audit[4494]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4494 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:29.075000 audit[4494]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffe7860ec20 a2=0 a3=7ffe7860ec0c items=0 ppid=4145 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.075000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:29.101930 systemd-networkd[1527]: cali5f1dd597bfe: Link UP Jun 25 16:24:29.106158 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f1dd597bfe: link becomes ready Jun 25 16:24:29.105797 systemd-networkd[1527]: cali5f1dd597bfe: Gained carrier Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.828 [INFO][4450] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0 coredns-5dd5756b68- kube-system f52a3793-af6f-4a8c-9790-d32a4489299c 751 0 2024-06-25 16:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-32 coredns-5dd5756b68-qd8fw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f1dd597bfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.833 [INFO][4450] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.909 [INFO][4470] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" HandleID="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.930 [INFO][4470] ipam_plugin.go 264: Auto assigning IP ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" HandleID="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dded0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-32", "pod":"coredns-5dd5756b68-qd8fw", "timestamp":"2024-06-25 16:24:28.90950069 +0000 UTC"}, Hostname:"ip-172-31-29-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.930 [INFO][4470] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4470] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.958 [INFO][4470] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-32' Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.963 [INFO][4470] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:28.990 [INFO][4470] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.025 [INFO][4470] ipam.go 489: Trying affinity for 192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.037 [INFO][4470] ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.044 [INFO][4470] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.044 [INFO][4470] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.049 [INFO][4470] ipam.go 1685: Creating new handle: k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373 Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.057 [INFO][4470] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.081 [INFO][4470] ipam.go 1216: Successfully claimed IPs: [192.168.74.194/26] block=192.168.74.192/26 handle="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.082 [INFO][4470] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.194/26] handle="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" host="ip-172-31-29-32" Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.082 [INFO][4470] ipam_plugin.go 373: Released host-wide IPAM lock. 
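The IPAM entries above show Calico taking the host-wide lock, confirming this node's affinity for the 192.168.74.192/26 block, and then claiming 192.168.74.194 for the coredns pod (192.168.74.193 already belongs to the csi-node-driver endpoint). A minimal Go sketch of the ordinal arithmetic inside such a /26 block follows; it is an illustration only, not Calico's actual allocator, and the reservation of .192 is an assumption.

package main

import (
	"fmt"
	"net"
)

// nextFreeInBlock returns the first address in the block that is not already
// assigned. Calico's real allocator tracks ordinals in its datastore; this
// only mirrors the arithmetic hinted at by the log above.
func nextFreeInBlock(block *net.IPNet, used map[string]bool) net.IP {
	ones, bits := block.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses for a /26
	base := block.IP.To4()
	for ord := 0; ord < size; ord++ {
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(ord))
		if !used[ip.String()] {
			return ip
		}
	}
	return nil
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.74.192/26")
	used := map[string]bool{
		"192.168.74.192": true, // assumed reserved/claimed before this section
		"192.168.74.193": true, // csi-node-driver-bcwhx, claimed earlier in the log
	}
	fmt.Println("next free:", nextFreeInBlock(block, used)) // 192.168.74.194
}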
Jun 25 16:24:29.143818 containerd[1794]: 2024-06-25 16:24:29.082 [INFO][4470] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.74.194/26] IPv6=[] ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" HandleID="k8s-pod-network.c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 16:24:29.085 [INFO][4450] k8s.go 386: Populated endpoint ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f52a3793-af6f-4a8c-9790-d32a4489299c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"", Pod:"coredns-5dd5756b68-qd8fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1dd597bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 16:24:29.085 [INFO][4450] k8s.go 387: Calico CNI using IPs: [192.168.74.194/32] ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 16:24:29.085 [INFO][4450] dataplane_linux.go 68: Setting the host side veth name to cali5f1dd597bfe ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 16:24:29.112 [INFO][4450] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 
16:24:29.115 [INFO][4450] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f52a3793-af6f-4a8c-9790-d32a4489299c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373", Pod:"coredns-5dd5756b68-qd8fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1dd597bfe", MAC:"e6:03:c6:12:73:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:29.144924 containerd[1794]: 2024-06-25 16:24:29.140 [INFO][4450] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373" Namespace="kube-system" Pod="coredns-5dd5756b68-qd8fw" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:29.238000 audit[4522]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:29.238000 audit[4522]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffc4b779930 a2=0 a3=7ffc4b77991c items=0 ppid=4145 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.238000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:29.250996 containerd[1794]: time="2024-06-25T16:24:29.250494312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:29.251449 containerd[1794]: time="2024-06-25T16:24:29.251018722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:29.251449 containerd[1794]: time="2024-06-25T16:24:29.251101146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:29.251449 containerd[1794]: time="2024-06-25T16:24:29.251163102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:29.336480 systemd[1]: Started cri-containerd-0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0.scope - libcontainer container 0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0. Jun 25 16:24:29.359052 containerd[1794]: time="2024-06-25T16:24:29.354018705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:29.359052 containerd[1794]: time="2024-06-25T16:24:29.354078884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:29.359882 containerd[1794]: time="2024-06-25T16:24:29.359724207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:29.360256 containerd[1794]: time="2024-06-25T16:24:29.360148812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:29.406192 systemd[1]: Started cri-containerd-c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373.scope - libcontainer container c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373. 
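The shim "loading plugin" lines and the systemd "Started cri-containerd-….scope" units mean the runc v2 shims for both sandboxes (0e6a0b27… and c154bdff…) are now running under containerd. A minimal sketch of listing those containers with the containerd Go client; the k8s.io namespace is what CRI uses, and the socket path is assumed to be the stock /run/containerd/containerd.sock.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumes the default containerd socket; kubelet talks to the same daemon,
	// so sandbox and pod containers live in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		// Sandbox IDs such as 0e6a0b27… and c154bdff… from the log show up here.
		fmt.Println(c.ID())
	}
}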
Jun 25 16:24:29.430000 audit: BPF prog-id=145 op=LOAD Jun 25 16:24:29.430000 audit: BPF prog-id=146 op=LOAD Jun 25 16:24:29.430000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4540 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331353462646666366332373336663138363763316635313563663766 Jun 25 16:24:29.430000 audit: BPF prog-id=147 op=LOAD Jun 25 16:24:29.430000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4540 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331353462646666366332373336663138363763316635313563663766 Jun 25 16:24:29.430000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:24:29.430000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:24:29.430000 audit: BPF prog-id=148 op=LOAD Jun 25 16:24:29.430000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4540 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331353462646666366332373336663138363763316635313563663766 Jun 25 16:24:29.437000 audit: BPF prog-id=149 op=LOAD Jun 25 16:24:29.438000 audit: BPF prog-id=150 op=LOAD Jun 25 16:24:29.438000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4509 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065366130623237333166656661633663306336333433323137376430 Jun 25 16:24:29.438000 audit: BPF prog-id=151 op=LOAD Jun 25 16:24:29.438000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4509 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.438000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065366130623237333166656661633663306336333433323137376430 Jun 25 16:24:29.438000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:24:29.438000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:24:29.438000 audit: BPF prog-id=152 op=LOAD Jun 25 16:24:29.438000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4509 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065366130623237333166656661633663306336333433323137376430 Jun 25 16:24:29.505250 containerd[1794]: time="2024-06-25T16:24:29.505112085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcwhx,Uid:2bece7e7-c85d-4cea-8dc0-bcb503dd2a60,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0\"" Jun 25 16:24:29.506183 containerd[1794]: time="2024-06-25T16:24:29.506142696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qd8fw,Uid:f52a3793-af6f-4a8c-9790-d32a4489299c,Namespace:kube-system,Attempt:1,} returns sandbox id \"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373\"" Jun 25 16:24:29.520754 containerd[1794]: time="2024-06-25T16:24:29.520700684Z" level=info msg="CreateContainer within sandbox \"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:24:29.523029 containerd[1794]: time="2024-06-25T16:24:29.522971063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:24:29.594001 containerd[1794]: time="2024-06-25T16:24:29.593948971Z" level=info msg="CreateContainer within sandbox \"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee65c97d273183f68b4a80719d61a446b77c3c0b436a138bac5799c06cc89922\"" Jun 25 16:24:29.597987 containerd[1794]: time="2024-06-25T16:24:29.597840543Z" level=info msg="StartContainer for \"ee65c97d273183f68b4a80719d61a446b77c3c0b436a138bac5799c06cc89922\"" Jun 25 16:24:29.647919 systemd[1]: Started cri-containerd-ee65c97d273183f68b4a80719d61a446b77c3c0b436a138bac5799c06cc89922.scope - libcontainer container ee65c97d273183f68b4a80719d61a446b77c3c0b436a138bac5799c06cc89922. 
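The audit PROCTITLE fields above are hex-encoded command lines with NUL-separated arguments (here, the iptables-nft-restore and runc invocations). A small stdlib Go sketch that decodes one of them back into readable argv:

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into argv.
// Arguments are separated by NUL bytes in the raw record.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// PROCTITLE value copied from one of the NETFILTER_CFG records above.
	argv, err := decodeProctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}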
Jun 25 16:24:29.668000 audit: BPF prog-id=153 op=LOAD Jun 25 16:24:29.668000 audit: BPF prog-id=154 op=LOAD Jun 25 16:24:29.668000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4540 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363563393764323733313833663638623461383037313964363161 Jun 25 16:24:29.668000 audit: BPF prog-id=155 op=LOAD Jun 25 16:24:29.668000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4540 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363563393764323733313833663638623461383037313964363161 Jun 25 16:24:29.668000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:24:29.668000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:24:29.668000 audit: BPF prog-id=156 op=LOAD Jun 25 16:24:29.668000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4540 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363563393764323733313833663638623461383037313964363161 Jun 25 16:24:29.691263 containerd[1794]: time="2024-06-25T16:24:29.691214168Z" level=info msg="StartContainer for \"ee65c97d273183f68b4a80719d61a446b77c3c0b436a138bac5799c06cc89922\" returns successfully" Jun 25 16:24:29.783856 containerd[1794]: time="2024-06-25T16:24:29.782857182Z" level=info msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" Jun 25 16:24:29.784614 containerd[1794]: time="2024-06-25T16:24:29.783136912Z" level=info msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.992 [INFO][4645] k8s.go 608: Cleaning up netns ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.992 [INFO][4645] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" iface="eth0" netns="/var/run/netns/cni-7e050abb-244a-b77c-41a4-2cced9b44b7f" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.992 [INFO][4645] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" iface="eth0" netns="/var/run/netns/cni-7e050abb-244a-b77c-41a4-2cced9b44b7f" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.993 [INFO][4645] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" iface="eth0" netns="/var/run/netns/cni-7e050abb-244a-b77c-41a4-2cced9b44b7f" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.993 [INFO][4645] k8s.go 615: Releasing IP address(es) ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:29.993 [INFO][4645] utils.go 188: Calico CNI releasing IP address ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.098 [INFO][4662] ipam_plugin.go 411: Releasing address using handleID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.100 [INFO][4662] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.100 [INFO][4662] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.122 [WARNING][4662] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.122 [INFO][4662] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.127 [INFO][4662] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:30.133846 containerd[1794]: 2024-06-25 16:24:30.129 [INFO][4645] k8s.go 621: Teardown processing complete. 
ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:30.138308 containerd[1794]: time="2024-06-25T16:24:30.135975969Z" level=info msg="TearDown network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" successfully" Jun 25 16:24:30.138308 containerd[1794]: time="2024-06-25T16:24:30.136030960Z" level=info msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" returns successfully" Jun 25 16:24:30.139725 containerd[1794]: time="2024-06-25T16:24:30.139680876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777566c45b-m5859,Uid:33705f7f-8163-4c78-a2e8-26b7380a9eca,Namespace:calico-system,Attempt:1,}" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.070 [INFO][4656] k8s.go 608: Cleaning up netns ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.070 [INFO][4656] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" iface="eth0" netns="/var/run/netns/cni-edeb1e75-a0fc-6111-a755-a7ed430d5e7b" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.070 [INFO][4656] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" iface="eth0" netns="/var/run/netns/cni-edeb1e75-a0fc-6111-a755-a7ed430d5e7b" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.071 [INFO][4656] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" iface="eth0" netns="/var/run/netns/cni-edeb1e75-a0fc-6111-a755-a7ed430d5e7b" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.071 [INFO][4656] k8s.go 615: Releasing IP address(es) ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.071 [INFO][4656] utils.go 188: Calico CNI releasing IP address ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.216 [INFO][4669] ipam_plugin.go 411: Releasing address using handleID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.222 [INFO][4669] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.222 [INFO][4669] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.244 [WARNING][4669] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.244 [INFO][4669] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.250 [INFO][4669] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:30.258416 containerd[1794]: 2024-06-25 16:24:30.255 [INFO][4656] k8s.go 621: Teardown processing complete. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:30.259203 containerd[1794]: time="2024-06-25T16:24:30.258572056Z" level=info msg="TearDown network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" successfully" Jun 25 16:24:30.259203 containerd[1794]: time="2024-06-25T16:24:30.258610836Z" level=info msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" returns successfully" Jun 25 16:24:30.265079 containerd[1794]: time="2024-06-25T16:24:30.265011419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8sldg,Uid:f5a51c64-0cb9-42e6-90f0-efef0dbd993c,Namespace:kube-system,Attempt:1,}" Jun 25 16:24:30.430000 audit[4711]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:30.430000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcbc614650 a2=0 a3=7ffcbc61463c items=0 ppid=3254 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:30.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:30.439000 audit[4711]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:30.439000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcbc614650 a2=0 a3=0 items=0 ppid=3254 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:30.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:30.582883 systemd[1]: run-netns-cni\x2dedeb1e75\x2da0fc\x2d6111\x2da755\x2da7ed430d5e7b.mount: Deactivated successfully. Jun 25 16:24:30.584262 systemd[1]: run-netns-cni\x2d7e050abb\x2d244a\x2db77c\x2d41a4\x2d2cced9b44b7f.mount: Deactivated successfully. 
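After both StopPodSandbox teardowns, systemd reports the run-netns-cni-… mounts as deactivated. If a teardown were interrupted, its handle would linger under /var/run/netns (the path the dataplane log references). A quick stdlib sketch for spotting leftovers, assuming that directory is where the CNI plugin keeps its handles on this host:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// The teardown entries above reference /var/run/netns/cni-… paths.
	entries, err := os.ReadDir("/var/run/netns")
	if err != nil {
		if os.IsNotExist(err) {
			fmt.Println("no netns handles left behind")
			return
		}
		log.Fatal(err)
	}
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "cni-") {
			fmt.Println("leftover CNI netns:", e.Name())
		}
	}
}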
Jun 25 16:24:30.626901 systemd-networkd[1527]: cali804ba4f2865: Link UP Jun 25 16:24:30.634374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:30.634498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali804ba4f2865: link becomes ready Jun 25 16:24:30.633987 systemd-networkd[1527]: cali804ba4f2865: Gained carrier Jun 25 16:24:30.664250 kubelet[2901]: I0625 16:24:30.664213 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qd8fw" podStartSLOduration=37.664078867 podCreationTimestamp="2024-06-25 16:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:30.390568819 +0000 UTC m=+50.921605164" watchObservedRunningTime="2024-06-25 16:24:30.664078867 +0000 UTC m=+51.195115209" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.328 [INFO][4683] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0 calico-kube-controllers-777566c45b- calico-system 33705f7f-8163-4c78-a2e8-26b7380a9eca 773 0 2024-06-25 16:24:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:777566c45b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-32 calico-kube-controllers-777566c45b-m5859 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali804ba4f2865 [] []}} ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.329 [INFO][4683] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.508 [INFO][4696] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" HandleID="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.532 [INFO][4696] ipam_plugin.go 264: Auto assigning IP ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" HandleID="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-32", "pod":"calico-kube-controllers-777566c45b-m5859", "timestamp":"2024-06-25 16:24:30.508688181 +0000 UTC"}, Hostname:"ip-172-31-29-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
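The kubelet pod_startup_latency_tracker entry above reports podStartSLOduration=37.664078867 for coredns-5dd5756b68-qd8fw; with both image-pull timestamps zeroed, that figure is simply observedRunningTime minus podCreationTimestamp. A stdlib sketch reproducing the arithmetic from the two wall-clock timestamps in the log (the monotonic m=+… suffix is dropped):

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
	created, err := time.Parse(layout, "2024-06-25 16:23:53 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2024-06-25 16:24:30.664078867 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// With no image pull in between, the reported SLO duration is just the gap.
	fmt.Println(running.Sub(created)) // 37.664078867s
}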
Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.532 [INFO][4696] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.532 [INFO][4696] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.532 [INFO][4696] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-32' Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.536 [INFO][4696] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.545 [INFO][4696] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.554 [INFO][4696] ipam.go 489: Trying affinity for 192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.559 [INFO][4696] ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.563 [INFO][4696] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.564 [INFO][4696] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.567 [INFO][4696] ipam.go 1685: Creating new handle: k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.604 [INFO][4696] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.616 [INFO][4696] ipam.go 1216: Successfully claimed IPs: [192.168.74.195/26] block=192.168.74.192/26 handle="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.617 [INFO][4696] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.195/26] handle="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" host="ip-172-31-29-32" Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.617 [INFO][4696] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:24:30.672601 containerd[1794]: 2024-06-25 16:24:30.617 [INFO][4696] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.74.195/26] IPv6=[] ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" HandleID="k8s-pod-network.971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.621 [INFO][4683] k8s.go 386: Populated endpoint ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0", GenerateName:"calico-kube-controllers-777566c45b-", Namespace:"calico-system", SelfLink:"", UID:"33705f7f-8163-4c78-a2e8-26b7380a9eca", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777566c45b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"", Pod:"calico-kube-controllers-777566c45b-m5859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali804ba4f2865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.621 [INFO][4683] k8s.go 387: Calico CNI using IPs: [192.168.74.195/32] ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.621 [INFO][4683] dataplane_linux.go 68: Setting the host side veth name to cali804ba4f2865 ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.640 [INFO][4683] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.640 [INFO][4683] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0", GenerateName:"calico-kube-controllers-777566c45b-", Namespace:"calico-system", SelfLink:"", UID:"33705f7f-8163-4c78-a2e8-26b7380a9eca", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777566c45b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d", Pod:"calico-kube-controllers-777566c45b-m5859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali804ba4f2865", MAC:"66:43:0a:e0:09:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:30.673653 containerd[1794]: 2024-06-25 16:24:30.667 [INFO][4683] k8s.go 500: Wrote updated endpoint to datastore ContainerID="971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d" Namespace="calico-system" Pod="calico-kube-controllers-777566c45b-m5859" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:30.789000 audit[4750]: NETFILTER_CFG table=filter:105 family=2 entries=44 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:30.789000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=22680 a0=3 a1=7ffed6586090 a2=0 a3=7ffed658607c items=0 ppid=4145 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:30.789000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:30.756372 systemd-networkd[1527]: cali8c406fc5b8e: Gained IPv6LL Jun 25 16:24:30.832071 containerd[1794]: time="2024-06-25T16:24:30.831902775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:30.832613 containerd[1794]: time="2024-06-25T16:24:30.832560364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:30.832788 containerd[1794]: time="2024-06-25T16:24:30.832754089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:30.847645 containerd[1794]: time="2024-06-25T16:24:30.847491893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:30.948998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib2853b6aafb: link becomes ready Jun 25 16:24:30.950632 systemd-networkd[1527]: calib2853b6aafb: Link UP Jun 25 16:24:30.951754 systemd-networkd[1527]: calib2853b6aafb: Gained carrier Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.609 [INFO][4702] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0 coredns-5dd5756b68- kube-system f5a51c64-0cb9-42e6-90f0-efef0dbd993c 774 0 2024-06-25 16:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-32 coredns-5dd5756b68-8sldg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2853b6aafb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.610 [INFO][4702] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.731 [INFO][4719] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" HandleID="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.752 [INFO][4719] ipam_plugin.go 264: Auto assigning IP ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" HandleID="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310000), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-32", "pod":"coredns-5dd5756b68-8sldg", "timestamp":"2024-06-25 16:24:30.727453511 +0000 UTC"}, Hostname:"ip-172-31-29-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.752 [INFO][4719] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.753 [INFO][4719] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.753 [INFO][4719] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-32' Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.822 [INFO][4719] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.846 [INFO][4719] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.871 [INFO][4719] ipam.go 489: Trying affinity for 192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.883 [INFO][4719] ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.905 [INFO][4719] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.905 [INFO][4719] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.909 [INFO][4719] ipam.go 1685: Creating new handle: k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.917 [INFO][4719] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.936 [INFO][4719] ipam.go 1216: Successfully claimed IPs: [192.168.74.196/26] block=192.168.74.192/26 handle="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.937 [INFO][4719] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.196/26] handle="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" host="ip-172-31-29-32" Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.937 [INFO][4719] ipam_plugin.go 373: Released host-wide IPAM lock. 
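By this point the IPAM plugin has claimed .193, .194, .195 and .196 from the same /26. When auditing a node it can be handy to pull those assignments straight out of the journal; a hedged sketch that scans journal text on stdin for the "Calico CNI IPAM assigned addresses" lines and prints the claimed IPv4 next to the workload. The file name and the journalctl invocation in the comment are assumptions, and the regexp expects one log entry per line as journalctl emits it.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Example (hypothetical): journalctl -t containerd | go run scanipam.go
func main() {
	re := regexp.MustCompile(`Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\].*Workload="([^"]+)"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these log lines are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-20s -> %s\n", m[1], m[2])
		}
	}
}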
Jun 25 16:24:30.984640 containerd[1794]: 2024-06-25 16:24:30.937 [INFO][4719] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.74.196/26] IPv6=[] ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" HandleID="k8s-pod-network.f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 16:24:30.940 [INFO][4702] k8s.go 386: Populated endpoint ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5a51c64-0cb9-42e6-90f0-efef0dbd993c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"", Pod:"coredns-5dd5756b68-8sldg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2853b6aafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 16:24:30.941 [INFO][4702] k8s.go 387: Calico CNI using IPs: [192.168.74.196/32] ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 16:24:30.941 [INFO][4702] dataplane_linux.go 68: Setting the host side veth name to calib2853b6aafb ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 16:24:30.944 [INFO][4702] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 
16:24:30.944 [INFO][4702] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5a51c64-0cb9-42e6-90f0-efef0dbd993c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b", Pod:"coredns-5dd5756b68-8sldg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2853b6aafb", MAC:"d2:29:40:e8:03:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:30.987628 containerd[1794]: 2024-06-25 16:24:30.980 [INFO][4702] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b" Namespace="kube-system" Pod="coredns-5dd5756b68-8sldg" WorkloadEndpoint="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:31.049103 systemd[1]: Started cri-containerd-971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d.scope - libcontainer container 971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d. Jun 25 16:24:31.076060 systemd-networkd[1527]: cali5f1dd597bfe: Gained IPv6LL Jun 25 16:24:31.077268 systemd[1]: run-containerd-runc-k8s.io-971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d-runc.vyo51U.mount: Deactivated successfully. 
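The systemd-networkd and kernel messages in this section track each cali… veth through Link UP, "link becomes ready" and Gained carrier/IPv6LL. The same state can be read back from sysfs; a stdlib sketch that prints the operstate of every cali interface on the host:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	ifaces, err := os.ReadDir("/sys/class/net")
	if err != nil {
		log.Fatal(err)
	}
	for _, i := range ifaces {
		name := i.Name()
		if !strings.HasPrefix(name, "cali") {
			continue
		}
		state, err := os.ReadFile(filepath.Join("/sys/class/net", name, "operstate"))
		if err != nil {
			continue
		}
		// "up" here corresponds to the Gained carrier / link becomes ready
		// messages from systemd-networkd and the kernel above.
		fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(state)))
	}
}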
Jun 25 16:24:31.121000 audit[4772]: NETFILTER_CFG table=filter:106 family=2 entries=34 op=nft_register_chain pid=4772 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:31.121000 audit[4772]: SYSCALL arch=c000003e syscall=46 success=yes exit=18204 a0=3 a1=7fffbad5b340 a2=0 a3=7fffbad5b32c items=0 ppid=4145 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:31.177000 audit: BPF prog-id=157 op=LOAD Jun 25 16:24:31.178000 audit: BPF prog-id=158 op=LOAD Jun 25 16:24:31.178000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316164656164373562346435626138313065336164633038633962 Jun 25 16:24:31.182000 audit: BPF prog-id=159 op=LOAD Jun 25 16:24:31.182000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316164656164373562346435626138313065336164633038633962 Jun 25 16:24:31.182000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:24:31.182000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:24:31.182000 audit: BPF prog-id=160 op=LOAD Jun 25 16:24:31.182000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316164656164373562346435626138313065336164633038633962 Jun 25 16:24:31.238340 containerd[1794]: time="2024-06-25T16:24:31.238201510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:31.238904 containerd[1794]: time="2024-06-25T16:24:31.238377197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:31.238904 containerd[1794]: time="2024-06-25T16:24:31.238449303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:31.239219 containerd[1794]: time="2024-06-25T16:24:31.239163796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:31.266144 systemd[1]: Started cri-containerd-f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b.scope - libcontainer container f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b. Jun 25 16:24:31.326000 audit: BPF prog-id=161 op=LOAD Jun 25 16:24:31.332000 audit: BPF prog-id=162 op=LOAD Jun 25 16:24:31.332000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4797 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634646463366461653736393462636433636237356131303961626562 Jun 25 16:24:31.335000 audit: BPF prog-id=163 op=LOAD Jun 25 16:24:31.335000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4797 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634646463366461653736393462636433636237356131303961626562 Jun 25 16:24:31.335000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:24:31.335000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:24:31.335000 audit: BPF prog-id=164 op=LOAD Jun 25 16:24:31.335000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4797 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634646463366461653736393462636433636237356131303961626562 Jun 25 16:24:31.386262 containerd[1794]: time="2024-06-25T16:24:31.386207337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777566c45b-m5859,Uid:33705f7f-8163-4c78-a2e8-26b7380a9eca,Namespace:calico-system,Attempt:1,} returns sandbox id \"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d\"" Jun 25 16:24:31.511134 containerd[1794]: time="2024-06-25T16:24:31.510856805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8sldg,Uid:f5a51c64-0cb9-42e6-90f0-efef0dbd993c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b\"" Jun 25 16:24:31.522261 containerd[1794]: time="2024-06-25T16:24:31.522215015Z" level=info msg="CreateContainer within sandbox 
\"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:24:31.565000 audit[4840]: NETFILTER_CFG table=filter:107 family=2 entries=11 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:31.565000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffce42f5fc0 a2=0 a3=7ffce42f5fac items=0 ppid=3254 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:31.586000 audit[4840]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:31.595031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394369705.mount: Deactivated successfully. Jun 25 16:24:31.601358 containerd[1794]: time="2024-06-25T16:24:31.601165209Z" level=info msg="CreateContainer within sandbox \"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba\"" Jun 25 16:24:31.603615 containerd[1794]: time="2024-06-25T16:24:31.602005956Z" level=info msg="StartContainer for \"10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba\"" Jun 25 16:24:31.586000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffce42f5fc0 a2=0 a3=7ffce42f5fac items=0 ppid=3254 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:31.714121 systemd[1]: Started cri-containerd-10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba.scope - libcontainer container 10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba. 
Jun 25 16:24:31.738000 audit: BPF prog-id=165 op=LOAD Jun 25 16:24:31.748000 audit: BPF prog-id=166 op=LOAD Jun 25 16:24:31.748000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4797 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613439623933353463636163636234353632343533336362373939 Jun 25 16:24:31.748000 audit: BPF prog-id=167 op=LOAD Jun 25 16:24:31.748000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4797 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613439623933353463636163636234353632343533336362373939 Jun 25 16:24:31.748000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:24:31.749000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:24:31.749000 audit: BPF prog-id=168 op=LOAD Jun 25 16:24:31.749000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4797 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613439623933353463636163636234353632343533336362373939 Jun 25 16:24:31.792949 containerd[1794]: time="2024-06-25T16:24:31.790922023Z" level=info msg="StartContainer for \"10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba\" returns successfully" Jun 25 16:24:31.884335 containerd[1794]: time="2024-06-25T16:24:31.884003604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.885972 containerd[1794]: time="2024-06-25T16:24:31.885909855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:24:31.887553 containerd[1794]: time="2024-06-25T16:24:31.887515972Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.890247 containerd[1794]: time="2024-06-25T16:24:31.890211490Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.893612 containerd[1794]: time="2024-06-25T16:24:31.893569893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.895002 containerd[1794]: time="2024-06-25T16:24:31.894960871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.371910177s" Jun 25 16:24:31.895096 containerd[1794]: time="2024-06-25T16:24:31.895008180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:24:31.897515 containerd[1794]: time="2024-06-25T16:24:31.897469790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:24:31.900083 containerd[1794]: time="2024-06-25T16:24:31.898537253Z" level=info msg="CreateContainer within sandbox \"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:24:31.964855 containerd[1794]: time="2024-06-25T16:24:31.964807106Z" level=info msg="CreateContainer within sandbox \"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"77bcc7e6cd27c3d68071841dc41ceaf2c76b69ac3facad42f4d8fb901f7d7390\"" Jun 25 16:24:31.966170 containerd[1794]: time="2024-06-25T16:24:31.966127762Z" level=info msg="StartContainer for \"77bcc7e6cd27c3d68071841dc41ceaf2c76b69ac3facad42f4d8fb901f7d7390\"" Jun 25 16:24:32.014091 systemd[1]: Started cri-containerd-77bcc7e6cd27c3d68071841dc41ceaf2c76b69ac3facad42f4d8fb901f7d7390.scope - libcontainer container 77bcc7e6cd27c3d68071841dc41ceaf2c76b69ac3facad42f4d8fb901f7d7390. 
Jun 25 16:24:32.062000 audit: BPF prog-id=169 op=LOAD Jun 25 16:24:32.062000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4509 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737626363376536636432376333643638303731383431646334316365 Jun 25 16:24:32.062000 audit: BPF prog-id=170 op=LOAD Jun 25 16:24:32.062000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4509 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737626363376536636432376333643638303731383431646334316365 Jun 25 16:24:32.062000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:24:32.062000 audit: BPF prog-id=169 op=UNLOAD Jun 25 16:24:32.062000 audit: BPF prog-id=171 op=LOAD Jun 25 16:24:32.062000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4509 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737626363376536636432376333643638303731383431646334316365 Jun 25 16:24:32.089587 containerd[1794]: time="2024-06-25T16:24:32.089532171Z" level=info msg="StartContainer for \"77bcc7e6cd27c3d68071841dc41ceaf2c76b69ac3facad42f4d8fb901f7d7390\" returns successfully" Jun 25 16:24:32.294969 systemd-networkd[1527]: cali804ba4f2865: Gained IPv6LL Jun 25 16:24:32.485347 kubelet[2901]: I0625 16:24:32.485315 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8sldg" podStartSLOduration=39.485266762 podCreationTimestamp="2024-06-25 16:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:32.454214667 +0000 UTC m=+52.985251009" watchObservedRunningTime="2024-06-25 16:24:32.485266762 +0000 UTC m=+53.016303108" Jun 25 16:24:32.592728 systemd[1]: run-containerd-runc-k8s.io-10a49b9354ccaccb45624533cb7991409583726233ad0df8427723011434cbba-runc.XLDwM8.mount: Deactivated successfully. 
Jun 25 16:24:32.632000 audit[4921]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.636134 kernel: kauditd_printk_skb: 108 callbacks suppressed Jun 25 16:24:32.636249 kernel: audit: type=1325 audit(1719332672.632:589): table=filter:109 family=2 entries=8 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.636288 kernel: audit: type=1300 audit(1719332672.632:589): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff5fb88090 a2=0 a3=7fff5fb8807c items=0 ppid=3254 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.632000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff5fb88090 a2=0 a3=7fff5fb8807c items=0 ppid=3254 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.641060 kernel: audit: type=1327 audit(1719332672.632:589): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.641000 audit[4921]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.648035 kernel: audit: type=1325 audit(1719332672.641:590): table=nat:110 family=2 entries=44 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.648127 kernel: audit: type=1300 audit(1719332672.641:590): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff5fb88090 a2=0 a3=7fff5fb8807c items=0 ppid=3254 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.641000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff5fb88090 a2=0 a3=7fff5fb8807c items=0 ppid=3254 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.650323 kernel: audit: type=1327 audit(1719332672.641:590): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.657000 audit[4923]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.657000 audit[4923]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffec3f87320 a2=0 a3=7ffec3f8730c items=0 ppid=3254 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 
25 16:24:32.663722 kernel: audit: type=1325 audit(1719332672.657:591): table=filter:111 family=2 entries=8 op=nft_register_rule pid=4923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.663811 kernel: audit: type=1300 audit(1719332672.657:591): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffec3f87320 a2=0 a3=7ffec3f8730c items=0 ppid=3254 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.665893 kernel: audit: type=1327 audit(1719332672.657:591): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.674000 audit[4923]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.677987 kernel: audit: type=1325 audit(1719332672.674:592): table=nat:112 family=2 entries=56 op=nft_register_chain pid=4923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:32.674000 audit[4923]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffec3f87320 a2=0 a3=7ffec3f8730c items=0 ppid=3254 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:32.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:32.804620 systemd-networkd[1527]: calib2853b6aafb: Gained IPv6LL Jun 25 16:24:33.171000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:33.171000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0009ce6a0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:33.171000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:33.171000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:33.171000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e46930 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:33.171000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:33.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.32:22-139.178.89.65:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:33.333362 systemd[1]: Started sshd@8-172.31.29.32:22-139.178.89.65:53702.service - OpenSSH per-connection server daemon (139.178.89.65:53702). Jun 25 16:24:33.749000 audit[4927]: USER_ACCT pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:33.753000 audit[4927]: CRED_ACQ pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:33.754304 sshd[4927]: Accepted publickey for core from 139.178.89.65 port 53702 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:33.755000 audit[4927]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff82995aa0 a2=3 a3=7f002c99f480 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:33.755000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:33.757508 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:33.773577 systemd-logind[1784]: New session 9 of user core. Jun 25 16:24:33.784337 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:24:33.794000 audit[4927]: USER_START pid=4927 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:33.797000 audit[4933]: CRED_ACQ pid=4933 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:34.430911 sshd[4927]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:34.450000 audit[4927]: USER_END pid=4927 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:34.452000 audit[4927]: CRED_DISP pid=4927 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:34.458154 systemd[1]: sshd@8-172.31.29.32:22-139.178.89.65:53702.service: Deactivated successfully. 
Jun 25 16:24:34.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.32:22-139.178.89.65:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:34.461832 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:24:34.465593 systemd-logind[1784]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:24:34.468482 systemd-logind[1784]: Removed session 9. Jun 25 16:24:35.482000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6322 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.482000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00c34b140 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.482000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.499000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.499000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c003d3d8e0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.499000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.500000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6307 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.500000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c00c34b290 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.500000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.501000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.501000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c00c34b2c0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.501000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.524000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.524000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c003175820 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.524000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.524000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:35.524000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c00c34b950 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:24:35.524000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:24:35.773901 containerd[1794]: time="2024-06-25T16:24:35.773752173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:35.838647 containerd[1794]: time="2024-06-25T16:24:35.838575691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:24:35.850023 containerd[1794]: time="2024-06-25T16:24:35.849973890Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:35.859831 containerd[1794]: time="2024-06-25T16:24:35.859790845Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:35.867812 
containerd[1794]: time="2024-06-25T16:24:35.867772094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:35.868968 containerd[1794]: time="2024-06-25T16:24:35.868913099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.971207727s" Jun 25 16:24:35.868968 containerd[1794]: time="2024-06-25T16:24:35.868960533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:24:35.870125 containerd[1794]: time="2024-06-25T16:24:35.870092455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:24:35.946583 containerd[1794]: time="2024-06-25T16:24:35.945646778Z" level=info msg="CreateContainer within sandbox \"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:24:36.381721 containerd[1794]: time="2024-06-25T16:24:36.381658897Z" level=info msg="CreateContainer within sandbox \"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479\"" Jun 25 16:24:36.382693 containerd[1794]: time="2024-06-25T16:24:36.382651102Z" level=info msg="StartContainer for \"4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479\"" Jun 25 16:24:36.439075 systemd[1]: Started cri-containerd-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479.scope - libcontainer container 4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479. 
Jun 25 16:24:36.475000 audit: BPF prog-id=172 op=LOAD Jun 25 16:24:36.476000 audit: BPF prog-id=173 op=LOAD Jun 25 16:24:36.476000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4746 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:36.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433383464346636643966343066646434313531363735383563653462 Jun 25 16:24:36.476000 audit: BPF prog-id=174 op=LOAD Jun 25 16:24:36.476000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4746 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:36.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433383464346636643966343066646434313531363735383563653462 Jun 25 16:24:36.476000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:24:36.477000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:24:36.477000 audit: BPF prog-id=175 op=LOAD Jun 25 16:24:36.477000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4746 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:36.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433383464346636643966343066646434313531363735383563653462 Jun 25 16:24:36.552352 containerd[1794]: time="2024-06-25T16:24:36.552305094Z" level=info msg="StartContainer for \"4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479\" returns successfully" Jun 25 16:24:36.910830 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.gJRnyt.mount: Deactivated successfully. Jun 25 16:24:37.616898 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.nGJLS4.mount: Deactivated successfully. 
Jun 25 16:24:37.739969 kubelet[2901]: I0625 16:24:37.735893 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-777566c45b-m5859" podStartSLOduration=33.261706418 podCreationTimestamp="2024-06-25 16:24:00 +0000 UTC" firstStartedPulling="2024-06-25 16:24:31.395527341 +0000 UTC m=+51.926563681" lastFinishedPulling="2024-06-25 16:24:35.869624714 +0000 UTC m=+56.400661050" observedRunningTime="2024-06-25 16:24:37.540958993 +0000 UTC m=+58.071995335" watchObservedRunningTime="2024-06-25 16:24:37.735803787 +0000 UTC m=+58.266840126" Jun 25 16:24:37.836209 containerd[1794]: time="2024-06-25T16:24:37.836154534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.837728 containerd[1794]: time="2024-06-25T16:24:37.837574117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:24:37.840191 containerd[1794]: time="2024-06-25T16:24:37.840013387Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.846959 containerd[1794]: time="2024-06-25T16:24:37.846520126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.854210 containerd[1794]: time="2024-06-25T16:24:37.854163357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.855461 containerd[1794]: time="2024-06-25T16:24:37.855410055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.985275478s" Jun 25 16:24:37.857074 containerd[1794]: time="2024-06-25T16:24:37.857038092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:24:37.859816 containerd[1794]: time="2024-06-25T16:24:37.859776076Z" level=info msg="CreateContainer within sandbox \"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:24:37.886259 containerd[1794]: time="2024-06-25T16:24:37.886125693Z" level=info msg="CreateContainer within sandbox \"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bba195e3ed7eae260191e7095aa0ba3024823cbeff50a5fe4de223275e6f608f\"" Jun 25 16:24:37.889410 containerd[1794]: time="2024-06-25T16:24:37.887752310Z" level=info msg="StartContainer for \"bba195e3ed7eae260191e7095aa0ba3024823cbeff50a5fe4de223275e6f608f\"" Jun 25 16:24:37.978991 systemd[1]: Started cri-containerd-bba195e3ed7eae260191e7095aa0ba3024823cbeff50a5fe4de223275e6f608f.scope - libcontainer container 
bba195e3ed7eae260191e7095aa0ba3024823cbeff50a5fe4de223275e6f608f. Jun 25 16:24:38.011000 audit: BPF prog-id=176 op=LOAD Jun 25 16:24:38.015835 kernel: kauditd_printk_skb: 49 callbacks suppressed Jun 25 16:24:38.015979 kernel: audit: type=1334 audit(1719332678.011:616): prog-id=176 op=LOAD Jun 25 16:24:38.011000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262613139356533656437656165323630313931653730393561613062 Jun 25 16:24:38.032546 kernel: audit: type=1300 audit(1719332678.011:616): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.032799 kernel: audit: type=1327 audit(1719332678.011:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262613139356533656437656165323630313931653730393561613062 Jun 25 16:24:38.014000 audit: BPF prog-id=177 op=LOAD Jun 25 16:24:38.014000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.044601 kernel: audit: type=1334 audit(1719332678.014:617): prog-id=177 op=LOAD Jun 25 16:24:38.044705 kernel: audit: type=1300 audit(1719332678.014:617): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262613139356533656437656165323630313931653730393561613062 Jun 25 16:24:38.047250 kernel: audit: type=1327 audit(1719332678.014:617): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262613139356533656437656165323630313931653730393561613062 Jun 25 16:24:38.048308 kernel: audit: type=1334 audit(1719332678.014:618): prog-id=177 op=UNLOAD Jun 25 16:24:38.014000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:24:38.014000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:24:38.051551 kernel: audit: type=1334 audit(1719332678.014:619): prog-id=176 op=UNLOAD Jun 25 16:24:38.051630 kernel: audit: type=1334 audit(1719332678.014:620): prog-id=178 op=LOAD Jun 25 16:24:38.051663 kernel: audit: type=1300 
audit(1719332678.014:620): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.014000 audit: BPF prog-id=178 op=LOAD Jun 25 16:24:38.014000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4509 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262613139356533656437656165323630313931653730393561613062 Jun 25 16:24:38.070357 containerd[1794]: time="2024-06-25T16:24:38.070275135Z" level=info msg="StartContainer for \"bba195e3ed7eae260191e7095aa0ba3024823cbeff50a5fe4de223275e6f608f\" returns successfully" Jun 25 16:24:38.201602 kubelet[2901]: I0625 16:24:38.201405 2901 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:24:38.201602 kubelet[2901]: I0625 16:24:38.201538 2901 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:24:38.551928 kubelet[2901]: I0625 16:24:38.551893 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bcwhx" podStartSLOduration=30.201715846 podCreationTimestamp="2024-06-25 16:24:00 +0000 UTC" firstStartedPulling="2024-06-25 16:24:29.507365159 +0000 UTC m=+50.038401491" lastFinishedPulling="2024-06-25 16:24:37.857428208 +0000 UTC m=+58.388464548" observedRunningTime="2024-06-25 16:24:38.550353117 +0000 UTC m=+59.081389459" watchObservedRunningTime="2024-06-25 16:24:38.551778903 +0000 UTC m=+59.082815246" Jun 25 16:24:39.458579 systemd[1]: Started sshd@9-172.31.29.32:22-139.178.89.65:52938.service - OpenSSH per-connection server daemon (139.178.89.65:52938). Jun 25 16:24:39.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.32:22-139.178.89.65:52938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:39.651000 audit[5058]: USER_ACCT pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:39.652756 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 52938 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:39.654000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:39.654000 audit[5058]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcce32d820 a2=3 a3=7f49af006480 items=0 ppid=1 pid=5058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:39.654000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:39.656430 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:39.666230 systemd-logind[1784]: New session 10 of user core. Jun 25 16:24:39.670172 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 16:24:39.684000 audit[5058]: USER_START pid=5058 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:39.689000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:39.697773 containerd[1794]: time="2024-06-25T16:24:39.697064980Z" level=info msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.788 [WARNING][5073] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5a51c64-0cb9-42e6-90f0-efef0dbd993c", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b", Pod:"coredns-5dd5756b68-8sldg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2853b6aafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.793 [INFO][5073] k8s.go 608: Cleaning up netns ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.793 [INFO][5073] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" iface="eth0" netns="" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.793 [INFO][5073] k8s.go 615: Releasing IP address(es) ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.793 [INFO][5073] utils.go 188: Calico CNI releasing IP address ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.850 [INFO][5085] ipam_plugin.go 411: Releasing address using handleID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.851 [INFO][5085] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.851 [INFO][5085] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.859 [WARNING][5085] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.859 [INFO][5085] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.861 [INFO][5085] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:39.869218 containerd[1794]: 2024-06-25 16:24:39.863 [INFO][5073] k8s.go 621: Teardown processing complete. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:39.872972 containerd[1794]: time="2024-06-25T16:24:39.872925441Z" level=info msg="TearDown network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" successfully" Jun 25 16:24:39.873286 containerd[1794]: time="2024-06-25T16:24:39.873074886Z" level=info msg="StopPodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" returns successfully" Jun 25 16:24:39.874463 containerd[1794]: time="2024-06-25T16:24:39.874429931Z" level=info msg="RemovePodSandbox for \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" Jun 25 16:24:39.889196 containerd[1794]: time="2024-06-25T16:24:39.874897875Z" level=info msg="Forcibly stopping sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\"" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:39.956 [WARNING][5106] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5a51c64-0cb9-42e6-90f0-efef0dbd993c", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"f4ddc6dae7694bcd3cb75a109abebe64d6f1b0882bf393fd130c05b8a643d82b", Pod:"coredns-5dd5756b68-8sldg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2853b6aafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:39.956 [INFO][5106] k8s.go 608: Cleaning up netns ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:39.956 [INFO][5106] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" iface="eth0" netns="" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:39.956 [INFO][5106] k8s.go 615: Releasing IP address(es) ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:39.956 [INFO][5106] utils.go 188: Calico CNI releasing IP address ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.028 [INFO][5116] ipam_plugin.go 411: Releasing address using handleID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.029 [INFO][5116] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.029 [INFO][5116] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.052 [WARNING][5116] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.052 [INFO][5116] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" HandleID="k8s-pod-network.5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--8sldg-eth0" Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.055 [INFO][5116] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:40.062029 containerd[1794]: 2024-06-25 16:24:40.058 [INFO][5106] k8s.go 621: Teardown processing complete. ContainerID="5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726" Jun 25 16:24:40.063233 containerd[1794]: time="2024-06-25T16:24:40.062075412Z" level=info msg="TearDown network for sandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" successfully" Jun 25 16:24:40.073609 containerd[1794]: time="2024-06-25T16:24:40.073355709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:40.073609 containerd[1794]: time="2024-06-25T16:24:40.073472883Z" level=info msg="RemovePodSandbox \"5888c8a62eccbe2dd7fd53bb00cd0487efdee3b908eef515c1134e5e86e67726\" returns successfully" Jun 25 16:24:40.074338 containerd[1794]: time="2024-06-25T16:24:40.074305501Z" level=info msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.174 [WARNING][5136] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0", GenerateName:"calico-kube-controllers-777566c45b-", Namespace:"calico-system", SelfLink:"", UID:"33705f7f-8163-4c78-a2e8-26b7380a9eca", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777566c45b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d", Pod:"calico-kube-controllers-777566c45b-m5859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali804ba4f2865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.174 [INFO][5136] k8s.go 608: Cleaning up netns ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.174 [INFO][5136] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" iface="eth0" netns="" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.174 [INFO][5136] k8s.go 615: Releasing IP address(es) ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.174 [INFO][5136] utils.go 188: Calico CNI releasing IP address ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.236 [INFO][5142] ipam_plugin.go 411: Releasing address using handleID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.236 [INFO][5142] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.236 [INFO][5142] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.253 [WARNING][5142] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.253 [INFO][5142] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.256 [INFO][5142] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:40.262857 containerd[1794]: 2024-06-25 16:24:40.260 [INFO][5136] k8s.go 621: Teardown processing complete. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.266682 containerd[1794]: time="2024-06-25T16:24:40.264370839Z" level=info msg="TearDown network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" successfully" Jun 25 16:24:40.266682 containerd[1794]: time="2024-06-25T16:24:40.264434780Z" level=info msg="StopPodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" returns successfully" Jun 25 16:24:40.266682 containerd[1794]: time="2024-06-25T16:24:40.266085196Z" level=info msg="RemovePodSandbox for \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" Jun 25 16:24:40.266682 containerd[1794]: time="2024-06-25T16:24:40.266130532Z" level=info msg="Forcibly stopping sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\"" Jun 25 16:24:40.299741 sshd[5058]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:40.303000 audit[5058]: USER_END pid=5058 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.303000 audit[5058]: CRED_DISP pid=5058 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.308257 systemd[1]: sshd@9-172.31.29.32:22-139.178.89.65:52938.service: Deactivated successfully. Jun 25 16:24:40.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.32:22-139.178.89.65:52938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.315012 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:24:40.318790 systemd-logind[1784]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:24:40.336600 systemd[1]: Started sshd@10-172.31.29.32:22-139.178.89.65:52946.service - OpenSSH per-connection server daemon (139.178.89.65:52946). Jun 25 16:24:40.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.32:22-139.178.89.65:52946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.338601 systemd-logind[1784]: Removed session 10. 
Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.379 [WARNING][5162] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0", GenerateName:"calico-kube-controllers-777566c45b-", Namespace:"calico-system", SelfLink:"", UID:"33705f7f-8163-4c78-a2e8-26b7380a9eca", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777566c45b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"971adead75b4d5ba810e3adc08c9bd02822822451dc4f966dd496be111d49c4d", Pod:"calico-kube-controllers-777566c45b-m5859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali804ba4f2865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.379 [INFO][5162] k8s.go 608: Cleaning up netns ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.380 [INFO][5162] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" iface="eth0" netns="" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.380 [INFO][5162] k8s.go 615: Releasing IP address(es) ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.380 [INFO][5162] utils.go 188: Calico CNI releasing IP address ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.420 [INFO][5172] ipam_plugin.go 411: Releasing address using handleID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.421 [INFO][5172] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.421 [INFO][5172] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.428 [WARNING][5172] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.428 [INFO][5172] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" HandleID="k8s-pod-network.1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Workload="ip--172--31--29--32-k8s-calico--kube--controllers--777566c45b--m5859-eth0" Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.430 [INFO][5172] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:40.434032 containerd[1794]: 2024-06-25 16:24:40.432 [INFO][5162] k8s.go 621: Teardown processing complete. ContainerID="1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa" Jun 25 16:24:40.435453 containerd[1794]: time="2024-06-25T16:24:40.434078098Z" level=info msg="TearDown network for sandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" successfully" Jun 25 16:24:40.440294 containerd[1794]: time="2024-06-25T16:24:40.440239731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:40.440448 containerd[1794]: time="2024-06-25T16:24:40.440324498Z" level=info msg="RemovePodSandbox \"1cfa4a11f3af78926deb414ff1ff72aa6e4424da81a90a7a51a26c94a2c2e5fa\" returns successfully" Jun 25 16:24:40.440995 containerd[1794]: time="2024-06-25T16:24:40.440959928Z" level=info msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" Jun 25 16:24:40.511000 audit[5169]: USER_ACCT pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.515656 sshd[5169]: Accepted publickey for core from 139.178.89.65 port 52946 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:40.517000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.518000 audit[5169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffde47da70 a2=3 a3=7f2650f2f480 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.518000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:40.520911 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:40.533940 systemd-logind[1784]: New session 11 of user core. Jun 25 16:24:40.536161 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:24:40.545000 audit[5169]: USER_START pid=5169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.548000 audit[5204]: CRED_ACQ pid=5204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.513 [WARNING][5193] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f52a3793-af6f-4a8c-9790-d32a4489299c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373", Pod:"coredns-5dd5756b68-qd8fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1dd597bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.514 [INFO][5193] k8s.go 608: Cleaning up netns ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.514 [INFO][5193] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" iface="eth0" netns="" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.514 [INFO][5193] k8s.go 615: Releasing IP address(es) ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.515 [INFO][5193] utils.go 188: Calico CNI releasing IP address ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.603 [INFO][5199] ipam_plugin.go 411: Releasing address using handleID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.603 [INFO][5199] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.603 [INFO][5199] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.611 [WARNING][5199] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.611 [INFO][5199] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.613 [INFO][5199] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:40.617954 containerd[1794]: 2024-06-25 16:24:40.615 [INFO][5193] k8s.go 621: Teardown processing complete. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.618945 containerd[1794]: time="2024-06-25T16:24:40.618002659Z" level=info msg="TearDown network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" successfully" Jun 25 16:24:40.618945 containerd[1794]: time="2024-06-25T16:24:40.618042432Z" level=info msg="StopPodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" returns successfully" Jun 25 16:24:40.619225 containerd[1794]: time="2024-06-25T16:24:40.619189042Z" level=info msg="RemovePodSandbox for \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" Jun 25 16:24:40.619389 containerd[1794]: time="2024-06-25T16:24:40.619227944Z" level=info msg="Forcibly stopping sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\"" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.696 [WARNING][5220] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f52a3793-af6f-4a8c-9790-d32a4489299c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"c154bdff6c2736f1867c1f515cf7fc5bf694fdc7c1d2677a3073866b2ddaa373", Pod:"coredns-5dd5756b68-qd8fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1dd597bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.696 [INFO][5220] k8s.go 608: Cleaning up netns ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.697 [INFO][5220] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" iface="eth0" netns="" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.697 [INFO][5220] k8s.go 615: Releasing IP address(es) ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.697 [INFO][5220] utils.go 188: Calico CNI releasing IP address ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.755 [INFO][5232] ipam_plugin.go 411: Releasing address using handleID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.755 [INFO][5232] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.756 [INFO][5232] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.769 [WARNING][5232] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.770 [INFO][5232] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" HandleID="k8s-pod-network.e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Workload="ip--172--31--29--32-k8s-coredns--5dd5756b68--qd8fw-eth0" Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.773 [INFO][5232] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:40.784973 containerd[1794]: 2024-06-25 16:24:40.776 [INFO][5220] k8s.go 621: Teardown processing complete. ContainerID="e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01" Jun 25 16:24:40.791164 containerd[1794]: time="2024-06-25T16:24:40.785005342Z" level=info msg="TearDown network for sandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" successfully" Jun 25 16:24:40.808134 containerd[1794]: time="2024-06-25T16:24:40.807999112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:40.808453 containerd[1794]: time="2024-06-25T16:24:40.808420534Z" level=info msg="RemovePodSandbox \"e418d1492582fba0ca7c1690af4afbba2b71afb6741342933e1dbcd2df287a01\" returns successfully" Jun 25 16:24:40.815914 containerd[1794]: time="2024-06-25T16:24:40.812557158Z" level=info msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:40.947 [WARNING][5253] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0", Pod:"csi-node-driver-bcwhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8c406fc5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:40.948 [INFO][5253] k8s.go 608: Cleaning up netns ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:40.948 [INFO][5253] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" iface="eth0" netns="" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:40.948 [INFO][5253] k8s.go 615: Releasing IP address(es) ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:40.948 [INFO][5253] utils.go 188: Calico CNI releasing IP address ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.000 [INFO][5259] ipam_plugin.go 411: Releasing address using handleID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.000 [INFO][5259] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.000 [INFO][5259] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.009 [WARNING][5259] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.009 [INFO][5259] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.011 [INFO][5259] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:41.021906 containerd[1794]: 2024-06-25 16:24:41.015 [INFO][5253] k8s.go 621: Teardown processing complete. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.023206 containerd[1794]: time="2024-06-25T16:24:41.021960936Z" level=info msg="TearDown network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" successfully" Jun 25 16:24:41.023206 containerd[1794]: time="2024-06-25T16:24:41.021998217Z" level=info msg="StopPodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" returns successfully" Jun 25 16:24:41.023206 containerd[1794]: time="2024-06-25T16:24:41.022505910Z" level=info msg="RemovePodSandbox for \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" Jun 25 16:24:41.023206 containerd[1794]: time="2024-06-25T16:24:41.022543204Z" level=info msg="Forcibly stopping sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\"" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.169 [WARNING][5279] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2bece7e7-c85d-4cea-8dc0-bcb503dd2a60", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"0e6a0b2731fefac6c0c63432177d06ee44642c714ab86224fd8556e78310a2d0", Pod:"csi-node-driver-bcwhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8c406fc5b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.169 [INFO][5279] k8s.go 608: Cleaning up netns ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.169 [INFO][5279] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" iface="eth0" netns="" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.170 [INFO][5279] k8s.go 615: Releasing IP address(es) ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.170 [INFO][5279] utils.go 188: Calico CNI releasing IP address ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.259 [INFO][5288] ipam_plugin.go 411: Releasing address using handleID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.259 [INFO][5288] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.259 [INFO][5288] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.267 [WARNING][5288] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.267 [INFO][5288] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" HandleID="k8s-pod-network.3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Workload="ip--172--31--29--32-k8s-csi--node--driver--bcwhx-eth0" Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.273 [INFO][5288] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:41.279294 containerd[1794]: 2024-06-25 16:24:41.275 [INFO][5279] k8s.go 621: Teardown processing complete. ContainerID="3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326" Jun 25 16:24:41.280040 containerd[1794]: time="2024-06-25T16:24:41.279354645Z" level=info msg="TearDown network for sandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" successfully" Jun 25 16:24:41.286183 containerd[1794]: time="2024-06-25T16:24:41.286126530Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:41.286423 containerd[1794]: time="2024-06-25T16:24:41.286395133Z" level=info msg="RemovePodSandbox \"3030512ae324e3cc994fbab78f81168a267980d96ef74a9a103d9c1ce3c6c326\" returns successfully" Jun 25 16:24:41.386989 sshd[5169]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:41.391000 audit[5169]: USER_END pid=5169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.392000 audit[5169]: CRED_DISP pid=5169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.32:22-139.178.89.65:52946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.395050 systemd[1]: sshd@10-172.31.29.32:22-139.178.89.65:52946.service: Deactivated successfully. Jun 25 16:24:41.396130 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:24:41.397029 systemd-logind[1784]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:24:41.398942 systemd-logind[1784]: Removed session 11. Jun 25 16:24:41.424530 systemd[1]: Started sshd@11-172.31.29.32:22-139.178.89.65:52948.service - OpenSSH per-connection server daemon (139.178.89.65:52948). Jun 25 16:24:41.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.32:22-139.178.89.65:52948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:41.626000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.627312 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 52948 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:41.628000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.628000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d9de840 a2=3 a3=7f4aaaa76480 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.628000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:41.629995 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:41.646509 systemd-logind[1784]: New session 12 of user core. Jun 25 16:24:41.648584 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:24:41.655000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.658000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.900781 sshd[5296]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:41.902000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.903000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:41.906534 systemd[1]: sshd@11-172.31.29.32:22-139.178.89.65:52948.service: Deactivated successfully. Jun 25 16:24:41.907421 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:24:41.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.32:22-139.178.89.65:52948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.908921 systemd-logind[1784]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:24:41.910076 systemd-logind[1784]: Removed session 12. Jun 25 16:24:43.890549 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.zFvaR9.mount: Deactivated successfully. 
Jun 25 16:24:46.945959 kernel: kauditd_printk_skb: 34 callbacks suppressed Jun 25 16:24:46.946114 kernel: audit: type=1130 audit(1719332686.943:648): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.32:22-139.178.89.65:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.32:22-139.178.89.65:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.943190 systemd[1]: Started sshd@12-172.31.29.32:22-139.178.89.65:38052.service - OpenSSH per-connection server daemon (139.178.89.65:38052). Jun 25 16:24:47.106000 audit[5339]: USER_ACCT pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.108125 sshd[5339]: Accepted publickey for core from 139.178.89.65 port 38052 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:47.112985 kernel: audit: type=1101 audit(1719332687.106:649): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.112000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.115627 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:47.121760 kernel: audit: type=1103 audit(1719332687.112:650): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.121922 kernel: audit: type=1006 audit(1719332687.114:651): pid=5339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:24:47.121978 kernel: audit: type=1300 audit(1719332687.114:651): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf85d52d0 a2=3 a3=7f06523e8480 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:47.114000 audit[5339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf85d52d0 a2=3 a3=7f06523e8480 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:47.114000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:47.124052 kernel: audit: type=1327 audit(1719332687.114:651): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:47.128936 systemd-logind[1784]: New session 13 of user core. Jun 25 16:24:47.133187 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 16:24:47.140000 audit[5339]: USER_START pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.145972 kernel: audit: type=1105 audit(1719332687.140:652): pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.145000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.149945 kernel: audit: type=1103 audit(1719332687.145:653): pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.383210 sshd[5339]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:47.384000 audit[5339]: USER_END pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.391196 kernel: audit: type=1106 audit(1719332687.384:654): pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.391331 kernel: audit: type=1104 audit(1719332687.384:655): pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.384000 audit[5339]: CRED_DISP pid=5339 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:47.389502 systemd[1]: sshd@12-172.31.29.32:22-139.178.89.65:38052.service: Deactivated successfully. Jun 25 16:24:47.390909 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:24:47.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.32:22-139.178.89.65:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.392754 systemd-logind[1784]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:24:47.394439 systemd-logind[1784]: Removed session 13. 
Jun 25 16:24:48.786000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:48.786000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0023aa000 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:48.786000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:48.789000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:48.789000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0023aa020 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:48.789000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:48.795000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:48.795000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:48.795000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0023aa1c0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:48.795000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:48.795000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002464180 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:24:48.795000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:52.421851 systemd[1]: Started sshd@13-172.31.29.32:22-139.178.89.65:38058.service - OpenSSH per-connection server daemon (139.178.89.65:38058). Jun 25 16:24:52.425897 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:24:52.426001 kernel: audit: type=1130 audit(1719332692.422:661): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.32:22-139.178.89.65:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.32:22-139.178.89.65:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.595000 audit[5357]: USER_ACCT pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.596715 sshd[5357]: Accepted publickey for core from 139.178.89.65 port 38058 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:52.602174 kernel: audit: type=1101 audit(1719332692.595:662): pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.602339 kernel: audit: type=1103 audit(1719332692.597:663): pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.602386 kernel: audit: type=1006 audit(1719332692.598:664): pid=5357 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:24:52.597000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.599572 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:52.603842 kernel: audit: type=1300 audit(1719332692.598:664): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff418ef690 a2=3 a3=7f0d539f3480 items=0 ppid=1 pid=5357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:52.598000 audit[5357]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff418ef690 a2=3 a3=7f0d539f3480 items=0 ppid=1 pid=5357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:52.609481 kernel: audit: type=1327 audit(1719332692.598:664): proctitle=737368643A20636F7265205B707269765D Jun 25 
16:24:52.598000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:52.615229 systemd-logind[1784]: New session 14 of user core. Jun 25 16:24:52.619106 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:24:52.628000 audit[5357]: USER_START pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.636034 kernel: audit: type=1105 audit(1719332692.628:665): pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.636131 kernel: audit: type=1103 audit(1719332692.633:666): pid=5359 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.633000 audit[5359]: CRED_ACQ pid=5359 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.842245 sshd[5357]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:52.847909 kernel: audit: type=1106 audit(1719332692.843:667): pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.843000 audit[5357]: USER_END pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.848254 systemd[1]: sshd@13-172.31.29.32:22-139.178.89.65:38058.service: Deactivated successfully. Jun 25 16:24:52.843000 audit[5357]: CRED_DISP pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.853962 kernel: audit: type=1104 audit(1719332692.843:668): pid=5357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:52.852339 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:24:52.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.32:22-139.178.89.65:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.855318 systemd-logind[1784]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:24:52.856850 systemd-logind[1784]: Removed session 14. 
Jun 25 16:24:53.684760 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.zu2Qoz.mount: Deactivated successfully. Jun 25 16:24:56.942332 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.ee7d3W.mount: Deactivated successfully. Jun 25 16:24:57.890287 systemd[1]: Started sshd@14-172.31.29.32:22-139.178.89.65:50908.service - OpenSSH per-connection server daemon (139.178.89.65:50908). Jun 25 16:24:57.896334 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:57.896582 kernel: audit: type=1130 audit(1719332697.890:670): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.32:22-139.178.89.65:50908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:57.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.32:22-139.178.89.65:50908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:58.054000 audit[5417]: USER_ACCT pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.054000 audit[5417]: CRED_ACQ pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.059544 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:58.062018 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 50908 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:24:58.062513 kernel: audit: type=1101 audit(1719332698.054:671): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.062591 kernel: audit: type=1103 audit(1719332698.054:672): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.062700 kernel: audit: type=1006 audit(1719332698.054:673): pid=5417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:24:58.064647 kernel: audit: type=1300 audit(1719332698.054:673): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff45800bd0 a2=3 a3=7fd01cbe4480 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:58.054000 audit[5417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff45800bd0 a2=3 a3=7fd01cbe4480 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:58.072598 kernel: audit: type=1327 
audit(1719332698.054:673): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:58.054000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:58.073612 systemd-logind[1784]: New session 15 of user core. Jun 25 16:24:58.079117 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:24:58.087000 audit[5417]: USER_START pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.089000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.095170 kernel: audit: type=1105 audit(1719332698.087:674): pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.095279 kernel: audit: type=1103 audit(1719332698.089:675): pid=5419 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.350398 sshd[5417]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:58.395676 kernel: audit: type=1106 audit(1719332698.366:676): pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.395805 kernel: audit: type=1104 audit(1719332698.367:677): pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.366000 audit[5417]: USER_END pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.367000 audit[5417]: CRED_DISP pid=5417 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:58.382231 systemd-logind[1784]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:24:58.392455 systemd[1]: sshd@14-172.31.29.32:22-139.178.89.65:50908.service: Deactivated successfully. Jun 25 16:24:58.396456 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:24:58.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.32:22-139.178.89.65:50908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:58.398246 systemd-logind[1784]: Removed session 15. Jun 25 16:25:03.404420 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:25:03.404549 kernel: audit: type=1130 audit(1719332703.398:679): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.32:22-139.178.89.65:50918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:03.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.32:22-139.178.89.65:50918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:03.399212 systemd[1]: Started sshd@15-172.31.29.32:22-139.178.89.65:50918.service - OpenSSH per-connection server daemon (139.178.89.65:50918). Jun 25 16:25:03.557000 audit[5430]: USER_ACCT pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.558185 sshd[5430]: Accepted publickey for core from 139.178.89.65 port 50918 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:03.560895 kernel: audit: type=1101 audit(1719332703.557:680): pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.560000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.561969 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:03.565527 kernel: audit: type=1103 audit(1719332703.560:681): pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.566081 kernel: audit: type=1006 audit(1719332703.560:682): pid=5430 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:25:03.566142 kernel: audit: type=1300 audit(1719332703.560:682): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe17303f80 a2=3 a3=7f39b8947480 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:03.560000 audit[5430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe17303f80 a2=3 a3=7f39b8947480 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:03.560000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:03.569828 kernel: audit: type=1327 audit(1719332703.560:682): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:03.574946 systemd-logind[1784]: New session 16 of user core. 
Jun 25 16:25:03.580144 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:25:03.588000 audit[5430]: USER_START pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.592000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.600700 kernel: audit: type=1105 audit(1719332703.588:683): pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.600820 kernel: audit: type=1103 audit(1719332703.592:684): pid=5432 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.817328 sshd[5430]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:03.831611 kernel: audit: type=1106 audit(1719332703.818:685): pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.818000 audit[5430]: USER_END pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.834959 systemd[1]: sshd@15-172.31.29.32:22-139.178.89.65:50918.service: Deactivated successfully. Jun 25 16:25:03.837519 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:25:03.838158 systemd-logind[1784]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:25:03.831000 audit[5430]: CRED_DISP pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.843159 kernel: audit: type=1104 audit(1719332703.831:686): pid=5430 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:03.842912 systemd-logind[1784]: Removed session 16. Jun 25 16:25:03.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.32:22-139.178.89.65:50918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:08.864980 systemd[1]: Started sshd@16-172.31.29.32:22-139.178.89.65:58222.service - OpenSSH per-connection server daemon (139.178.89.65:58222). 
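In the records above, auid=4294967295 and ses=4294967295 on the pre-login sshd events are sentinels, not garbage: 4294967295 is the unsigned 32-bit form of -1, which the audit subsystem uses to mean "not set". Once pam_loginuid runs for the core user, the same fields switch to auid=500 and a real session id (ses=14, 15, 16, ...), which is the old-auid/old-ses transition recorded in the type=1006 LOGIN events. A one-line check of the arithmetic, in Python and purely illustrative:

# 4294967295 is (uint32)-1, audit's sentinel for an unset loginuid/session id.
UNSET = 0xFFFFFFFF
print(UNSET == 4294967295)              # True
print(((-1) & 0xFFFFFFFF) == UNSET)     # True
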
Jun 25 16:25:08.867936 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:25:08.868024 kernel: audit: type=1130 audit(1719332708.864:688): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.32:22-139.178.89.65:58222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:08.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.32:22-139.178.89.65:58222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:09.052000 audit[5453]: USER_ACCT pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.053366 sshd[5453]: Accepted publickey for core from 139.178.89.65 port 58222 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:09.058000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.059676 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:09.062066 kernel: audit: type=1101 audit(1719332709.052:689): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.062186 kernel: audit: type=1103 audit(1719332709.058:690): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.062288 kernel: audit: type=1006 audit(1719332709.058:691): pid=5453 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:25:09.058000 audit[5453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffd42ffb0 a2=3 a3=7f6f1e870480 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:09.066898 kernel: audit: type=1300 audit(1719332709.058:691): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffd42ffb0 a2=3 a3=7f6f1e870480 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:09.058000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:09.068930 kernel: audit: type=1327 audit(1719332709.058:691): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:09.073800 systemd-logind[1784]: New session 17 of user core. Jun 25 16:25:09.077133 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:25:09.084000 audit[5453]: USER_START pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.087000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.091974 kernel: audit: type=1105 audit(1719332709.084:692): pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.092070 kernel: audit: type=1103 audit(1719332709.087:693): pid=5455 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.328533 sshd[5453]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:09.329000 audit[5453]: USER_END pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.329000 audit[5453]: CRED_DISP pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.334592 systemd[1]: sshd@16-172.31.29.32:22-139.178.89.65:58222.service: Deactivated successfully. Jun 25 16:25:09.335890 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:25:09.337353 kernel: audit: type=1106 audit(1719332709.329:694): pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.337416 kernel: audit: type=1104 audit(1719332709.329:695): pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.32:22-139.178.89.65:58222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:09.338302 systemd-logind[1784]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:25:09.340046 systemd-logind[1784]: Removed session 17. Jun 25 16:25:09.375656 systemd[1]: Started sshd@17-172.31.29.32:22-139.178.89.65:58228.service - OpenSSH per-connection server daemon (139.178.89.65:58228). 
Jun 25 16:25:09.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.32:22-139.178.89.65:58228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:09.587000 audit[5465]: USER_ACCT pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.588720 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 58228 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:09.590000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.590000 audit[5465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd687f83c0 a2=3 a3=7f6fb8334480 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:09.590000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:09.598626 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:09.620173 systemd-logind[1784]: New session 18 of user core. Jun 25 16:25:09.630930 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:25:09.655000 audit[5465]: USER_START pid=5465 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:09.657000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.454923 sshd[5465]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:10.456000 audit[5465]: USER_END pid=5465 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.457000 audit[5465]: CRED_DISP pid=5465 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.461904 systemd[1]: sshd@17-172.31.29.32:22-139.178.89.65:58228.service: Deactivated successfully. Jun 25 16:25:10.461957 systemd-logind[1784]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:25:10.463075 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:25:10.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.32:22-139.178.89.65:58228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:10.464216 systemd-logind[1784]: Removed session 18. Jun 25 16:25:10.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.32:22-139.178.89.65:58240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:10.488443 systemd[1]: Started sshd@18-172.31.29.32:22-139.178.89.65:58240.service - OpenSSH per-connection server daemon (139.178.89.65:58240). Jun 25 16:25:10.663000 audit[5475]: USER_ACCT pid=5475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.664524 sshd[5475]: Accepted publickey for core from 139.178.89.65 port 58240 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:10.665000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.666000 audit[5475]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe10710920 a2=3 a3=7efe891b4480 items=0 ppid=1 pid=5475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.666000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:10.669722 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:10.681768 systemd-logind[1784]: New session 19 of user core. Jun 25 16:25:10.686339 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:25:10.731000 audit[5475]: USER_START pid=5475 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:10.734000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.174000 audit[5490]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:12.174000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc993de520 a2=0 a3=7ffc993de50c items=0 ppid=3254 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:12.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:12.176000 audit[5490]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:12.176000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc993de520 a2=0 a3=0 items=0 ppid=3254 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:12.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:12.196000 audit[5492]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:12.196000 audit[5492]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdf6ad7970 a2=0 a3=7ffdf6ad795c items=0 ppid=3254 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:12.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:12.198000 audit[5492]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:12.198000 audit[5492]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf6ad7970 a2=0 a3=0 items=0 ppid=3254 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:12.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:12.216772 sshd[5475]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:12.221000 audit[5475]: USER_END pid=5475 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.221000 audit[5475]: CRED_DISP pid=5475 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.32:22-139.178.89.65:58240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:12.225575 systemd[1]: sshd@18-172.31.29.32:22-139.178.89.65:58240.service: Deactivated successfully. Jun 25 16:25:12.226550 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:25:12.227447 systemd-logind[1784]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:25:12.229512 systemd-logind[1784]: Removed session 19. Jun 25 16:25:12.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.32:22-139.178.89.65:58254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:12.248003 systemd[1]: Started sshd@19-172.31.29.32:22-139.178.89.65:58254.service - OpenSSH per-connection server daemon (139.178.89.65:58254). Jun 25 16:25:12.401000 audit[5495]: USER_ACCT pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.402469 sshd[5495]: Accepted publickey for core from 139.178.89.65 port 58254 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:12.402000 audit[5495]: CRED_ACQ pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.402000 audit[5495]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeafe658c0 a2=3 a3=7f629d2f1480 items=0 ppid=1 pid=5495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:12.402000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:12.404265 sshd[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:12.416002 systemd-logind[1784]: New session 20 of user core. Jun 25 16:25:12.422166 systemd[1]: Started session-20.scope - Session 20 of User core. 
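The NETFILTER_CFG/SYSCALL pairs a little earlier in this stretch (table=filter:113/115, table=nat:114/116) record an iptables ruleset refresh by a child of pid 3254; the log does not name the parent component, but the hex proctitle decodes, with the same approach as the earlier sketch, to iptables-restore -w 5 -W 100000 --noflush --counters, executed through /usr/sbin/xtables-nft-multi (the nft-backed iptables), and family=2 in those records is plain IPv4. A small check of both points (Python, illustrative):

# Decode the iptables-restore proctitle and confirm family=2 means AF_INET.
import socket

hex_value = ("69707461626C65732D726573746F7265002D770035002D57"
             "00313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(bytes.fromhex(hex_value).split(b"\x00"))
# [b'iptables-restore', b'-w', b'5', b'-W', b'100000', b'--noflush', b'--counters']
print(socket.AF_INET == 2)   # True: family=2 is IPv4
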
Jun 25 16:25:12.433000 audit[5495]: USER_START pid=5495 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:12.436000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.532654 sshd[5495]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:13.534000 audit[5495]: USER_END pid=5495 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.535000 audit[5495]: CRED_DISP pid=5495 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.538152 systemd-logind[1784]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:25:13.540583 systemd[1]: sshd@19-172.31.29.32:22-139.178.89.65:58254.service: Deactivated successfully. Jun 25 16:25:13.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.32:22-139.178.89.65:58254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:13.541679 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:25:13.543643 systemd-logind[1784]: Removed session 20. Jun 25 16:25:13.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.32:22-139.178.89.65:58264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:13.563282 systemd[1]: Started sshd@20-172.31.29.32:22-139.178.89.65:58264.service - OpenSSH per-connection server daemon (139.178.89.65:58264). 
Jun 25 16:25:13.752000 audit[5505]: USER_ACCT pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.753233 sshd[5505]: Accepted publickey for core from 139.178.89.65 port 58264 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:13.753000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.753000 audit[5505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff30368810 a2=3 a3=7fb22a1ba480 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:13.753000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:13.755533 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:13.763110 systemd-logind[1784]: New session 21 of user core. Jun 25 16:25:13.767104 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:25:13.774000 audit[5505]: USER_START pid=5505 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.776000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:13.926785 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.V8l93b.mount: Deactivated successfully. Jun 25 16:25:14.052461 sshd[5505]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:14.059310 kernel: kauditd_printk_skb: 54 callbacks suppressed Jun 25 16:25:14.059449 kernel: audit: type=1106 audit(1719332714.054:734): pid=5505 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:14.054000 audit[5505]: USER_END pid=5505 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:14.057922 systemd[1]: sshd@20-172.31.29.32:22-139.178.89.65:58264.service: Deactivated successfully. Jun 25 16:25:14.059027 systemd[1]: session-21.scope: Deactivated successfully. 
Jun 25 16:25:14.054000 audit[5505]: CRED_DISP pid=5505 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:14.060929 systemd-logind[1784]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:25:14.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.32:22-139.178.89.65:58264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:14.065727 kernel: audit: type=1104 audit(1719332714.054:735): pid=5505 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:14.065797 kernel: audit: type=1131 audit(1719332714.057:736): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.32:22-139.178.89.65:58264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:14.066385 systemd-logind[1784]: Removed session 21. Jun 25 16:25:19.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.32:22-139.178.89.65:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:19.106586 systemd[1]: Started sshd@21-172.31.29.32:22-139.178.89.65:36384.service - OpenSSH per-connection server daemon (139.178.89.65:36384). Jun 25 16:25:19.115579 kernel: audit: type=1130 audit(1719332719.106:737): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.32:22-139.178.89.65:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:19.297000 audit[5539]: USER_ACCT pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.298655 sshd[5539]: Accepted publickey for core from 139.178.89.65 port 36384 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:19.302117 kernel: audit: type=1101 audit(1719332719.297:738): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.302795 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:19.301000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.309458 kernel: audit: type=1103 audit(1719332719.301:739): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.309731 kernel: audit: type=1006 audit(1719332719.301:740): pid=5539 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:25:19.301000 audit[5539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecedc72e0 a2=3 a3=7fb153e09480 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:19.316913 kernel: audit: type=1300 audit(1719332719.301:740): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecedc72e0 a2=3 a3=7fb153e09480 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:19.301000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:19.323038 kernel: audit: type=1327 audit(1719332719.301:740): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:19.323310 systemd-logind[1784]: New session 22 of user core. Jun 25 16:25:19.330133 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 16:25:19.342000 audit[5539]: USER_START pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.346896 kernel: audit: type=1105 audit(1719332719.342:741): pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.346000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.350969 kernel: audit: type=1103 audit(1719332719.346:742): pid=5541 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.632028 sshd[5539]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:19.633000 audit[5539]: USER_END pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.633000 audit[5539]: CRED_DISP pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.642901 kernel: audit: type=1106 audit(1719332719.633:743): pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.643227 kernel: audit: type=1104 audit(1719332719.633:744): pid=5539 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:19.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.32:22-139.178.89.65:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:19.641601 systemd[1]: sshd@21-172.31.29.32:22-139.178.89.65:36384.service: Deactivated successfully. Jun 25 16:25:19.643885 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:25:19.645543 systemd-logind[1784]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:25:19.652578 systemd-logind[1784]: Removed session 22. 
Jun 25 16:25:20.249541 kubelet[2901]: I0625 16:25:20.249488 2901 topology_manager.go:215] "Topology Admit Handler" podUID="914ba38c-3637-46e8-80ca-3ca29d529086" podNamespace="calico-apiserver" podName="calico-apiserver-67f5b6b848-w46sj" Jun 25 16:25:20.285948 systemd[1]: Created slice kubepods-besteffort-pod914ba38c_3637_46e8_80ca_3ca29d529086.slice - libcontainer container kubepods-besteffort-pod914ba38c_3637_46e8_80ca_3ca29d529086.slice. Jun 25 16:25:20.325000 audit[5551]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=5551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:20.325000 audit[5551]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffd304a0430 a2=0 a3=7ffd304a041c items=0 ppid=3254 pid=5551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:20.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:20.326000 audit[5551]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:20.326000 audit[5551]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd304a0430 a2=0 a3=0 items=0 ppid=3254 pid=5551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:20.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:20.369340 kubelet[2901]: I0625 16:25:20.369288 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/914ba38c-3637-46e8-80ca-3ca29d529086-calico-apiserver-certs\") pod \"calico-apiserver-67f5b6b848-w46sj\" (UID: \"914ba38c-3637-46e8-80ca-3ca29d529086\") " pod="calico-apiserver/calico-apiserver-67f5b6b848-w46sj" Jun 25 16:25:20.384438 kubelet[2901]: I0625 16:25:20.384394 2901 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmk6\" (UniqueName: \"kubernetes.io/projected/914ba38c-3637-46e8-80ca-3ca29d529086-kube-api-access-5lmk6\") pod \"calico-apiserver-67f5b6b848-w46sj\" (UID: \"914ba38c-3637-46e8-80ca-3ca29d529086\") " pod="calico-apiserver/calico-apiserver-67f5b6b848-w46sj" Jun 25 16:25:20.478000 audit[5553]: NETFILTER_CFG table=filter:119 family=2 entries=34 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:20.478000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffed1f7c910 a2=0 a3=7ffed1f7c8fc items=0 ppid=3254 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:20.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:20.480000 audit[5553]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:25:20.480000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffed1f7c910 a2=0 a3=0 items=0 ppid=3254 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:20.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:20.489539 kubelet[2901]: E0625 16:25:20.489495 2901 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:25:20.508643 kubelet[2901]: E0625 16:25:20.508520 2901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/914ba38c-3637-46e8-80ca-3ca29d529086-calico-apiserver-certs podName:914ba38c-3637-46e8-80ca-3ca29d529086 nodeName:}" failed. No retries permitted until 2024-06-25 16:25:20.98961278 +0000 UTC m=+101.520649113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/914ba38c-3637-46e8-80ca-3ca29d529086-calico-apiserver-certs") pod "calico-apiserver-67f5b6b848-w46sj" (UID: "914ba38c-3637-46e8-80ca-3ca29d529086") : secret "calico-apiserver-certs" not found Jun 25 16:25:21.199385 containerd[1794]: time="2024-06-25T16:25:21.199327765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f5b6b848-w46sj,Uid:914ba38c-3637-46e8-80ca-3ca29d529086,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:25:21.560637 systemd-networkd[1527]: caliaa2a72fcc23: Link UP Jun 25 16:25:21.564238 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:21.564355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa2a72fcc23: link becomes ready Jun 25 16:25:21.563894 systemd-networkd[1527]: caliaa2a72fcc23: Gained carrier Jun 25 16:25:21.565950 (udev-worker)[5574]: Network interface NamePolicy= disabled on kernel command line. 
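The kubelet errors just above look like the usual creation-ordering race rather than a persistent failure: the calico-apiserver-67f5b6b848-w46sj pod was admitted before its calico-apiserver-certs secret existed, so MountVolume.SetUp fails once and kubelet schedules a retry after the logged 500ms backoff. The "No retries permitted until 2024-06-25 16:25:20.98961278" deadline is simply the failure time plus that durationBeforeRetry, which lines up with the 16:25:20.489... timestamps on the error records. A quick check of the arithmetic (Python; timestamps truncated to microsecond precision for the standard-library parser):

# The retry deadline in the kubelet error equals the failure time + 500ms backoff.
from datetime import datetime, timedelta

deadline = datetime.fromisoformat("2024-06-25T16:25:20.989612")
print(deadline - timedelta(milliseconds=500))   # 2024-06-25 16:25:20.489612
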
Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.373 [INFO][5556] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0 calico-apiserver-67f5b6b848- calico-apiserver 914ba38c-3637-46e8-80ca-3ca29d529086 1098 0 2024-06-25 16:25:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f5b6b848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-32 calico-apiserver-67f5b6b848-w46sj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa2a72fcc23 [] []}} ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.373 [INFO][5556] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.448 [INFO][5568] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" HandleID="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Workload="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.469 [INFO][5568] ipam_plugin.go 264: Auto assigning IP ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" HandleID="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Workload="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002911e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-32", "pod":"calico-apiserver-67f5b6b848-w46sj", "timestamp":"2024-06-25 16:25:21.448820741 +0000 UTC"}, Hostname:"ip-172-31-29-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.469 [INFO][5568] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.469 [INFO][5568] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.469 [INFO][5568] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-32' Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.475 [INFO][5568] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.500 [INFO][5568] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.515 [INFO][5568] ipam.go 489: Trying affinity for 192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.519 [INFO][5568] ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.523 [INFO][5568] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.524 [INFO][5568] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.526 [INFO][5568] ipam.go 1685: Creating new handle: k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49 Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.532 [INFO][5568] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.550 [INFO][5568] ipam.go 1216: Successfully claimed IPs: [192.168.74.197/26] block=192.168.74.192/26 handle="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.550 [INFO][5568] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.197/26] handle="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" host="ip-172-31-29-32" Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.550 [INFO][5568] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:25:21.595384 containerd[1794]: 2024-06-25 16:25:21.550 [INFO][5568] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.74.197/26] IPv6=[] ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" HandleID="k8s-pod-network.093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Workload="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.555 [INFO][5556] k8s.go 386: Populated endpoint ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0", GenerateName:"calico-apiserver-67f5b6b848-", Namespace:"calico-apiserver", SelfLink:"", UID:"914ba38c-3637-46e8-80ca-3ca29d529086", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f5b6b848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"", Pod:"calico-apiserver-67f5b6b848-w46sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2a72fcc23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.555 [INFO][5556] k8s.go 387: Calico CNI using IPs: [192.168.74.197/32] ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.555 [INFO][5556] dataplane_linux.go 68: Setting the host side veth name to caliaa2a72fcc23 ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.564 [INFO][5556] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.565 [INFO][5556] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0", GenerateName:"calico-apiserver-67f5b6b848-", Namespace:"calico-apiserver", SelfLink:"", UID:"914ba38c-3637-46e8-80ca-3ca29d529086", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f5b6b848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-32", ContainerID:"093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49", Pod:"calico-apiserver-67f5b6b848-w46sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2a72fcc23", MAC:"ee:66:59:c6:ed:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:21.596900 containerd[1794]: 2024-06-25 16:25:21.581 [INFO][5556] k8s.go 500: Wrote updated endpoint to datastore ContainerID="093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49" Namespace="calico-apiserver" Pod="calico-apiserver-67f5b6b848-w46sj" WorkloadEndpoint="ip--172--31--29--32-k8s-calico--apiserver--67f5b6b848--w46sj-eth0" Jun 25 16:25:21.650478 containerd[1794]: time="2024-06-25T16:25:21.650366377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:21.650671 containerd[1794]: time="2024-06-25T16:25:21.650558254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:21.650671 containerd[1794]: time="2024-06-25T16:25:21.650620408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:21.650780 containerd[1794]: time="2024-06-25T16:25:21.650680929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:21.706162 systemd[1]: run-containerd-runc-k8s.io-093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49-runc.WQuCg6.mount: Deactivated successfully. Jun 25 16:25:21.782429 systemd[1]: Started cri-containerd-093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49.scope - libcontainer container 093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49. 
Jun 25 16:25:21.784000 audit[5619]: NETFILTER_CFG table=filter:121 family=2 entries=51 op=nft_register_chain pid=5619 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:21.784000 audit[5619]: SYSCALL arch=c000003e syscall=46 success=yes exit=26260 a0=3 a1=7ffe0fdb70a0 a2=0 a3=7ffe0fdb708c items=0 ppid=4145 pid=5619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:21.784000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:21.847000 audit: BPF prog-id=179 op=LOAD Jun 25 16:25:21.848000 audit: BPF prog-id=180 op=LOAD Jun 25 16:25:21.848000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=5595 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:21.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333633363031306366323962336330343132383638613435343565 Jun 25 16:25:21.848000 audit: BPF prog-id=181 op=LOAD Jun 25 16:25:21.848000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=5595 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:21.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333633363031306366323962336330343132383638613435343565 Jun 25 16:25:21.848000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:25:21.848000 audit: BPF prog-id=180 op=UNLOAD Jun 25 16:25:21.848000 audit: BPF prog-id=182 op=LOAD Jun 25 16:25:21.848000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=5595 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:21.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333633363031306366323962336330343132383638613435343565 Jun 25 16:25:21.905182 containerd[1794]: time="2024-06-25T16:25:21.905140438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f5b6b848-w46sj,Uid:914ba38c-3637-46e8-80ca-3ca29d529086,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49\"" Jun 25 16:25:21.908391 containerd[1794]: time="2024-06-25T16:25:21.908333020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:25:23.108066 systemd-networkd[1527]: caliaa2a72fcc23: Gained IPv6LL Jun 25 16:25:23.446000 
audit[5633]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:23.446000 audit[5633]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd9d4f71e0 a2=0 a3=7ffd9d4f71cc items=0 ppid=3254 pid=5633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:23.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:23.449000 audit[5633]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=5633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:23.449000 audit[5633]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9d4f71e0 a2=0 a3=7ffd9d4f71cc items=0 ppid=3254 pid=5633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:23.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:23.668471 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.MsCu6R.mount: Deactivated successfully. Jun 25 16:25:24.676393 systemd[1]: Started sshd@22-172.31.29.32:22-139.178.89.65:36386.service - OpenSSH per-connection server daemon (139.178.89.65:36386). Jun 25 16:25:24.682408 kernel: kauditd_printk_skb: 34 callbacks suppressed Jun 25 16:25:24.682536 kernel: audit: type=1130 audit(1719332724.676:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.32:22-139.178.89.65:36386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:24.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.32:22-139.178.89.65:36386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:25.175000 audit[5659]: USER_ACCT pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.176000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.179620 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 36386 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:25.180550 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:25.183528 kernel: audit: type=1101 audit(1719332725.175:760): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.183627 kernel: audit: type=1103 audit(1719332725.176:761): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.185244 kernel: audit: type=1006 audit(1719332725.176:762): pid=5659 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:25:25.176000 audit[5659]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff91c23a30 a2=3 a3=7f35cb997480 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:25.190460 kernel: audit: type=1300 audit(1719332725.176:762): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff91c23a30 a2=3 a3=7f35cb997480 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:25.176000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:25.191661 kernel: audit: type=1327 audit(1719332725.176:762): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:25.200030 systemd-logind[1784]: New session 23 of user core. Jun 25 16:25:25.206341 systemd[1]: Started session-23.scope - Session 23 of User core. 
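The audit PROCTITLE fields in this section are hex-encoded command lines with NUL-separated arguments (737368643A20636F7265205B707269765D above is the sshd entry). A short decoder using only the standard library; ausearch -i, if the audit userspace tools are installed, performs the same interpretation:

    # Decode the hex-encoded, NUL-separated proctitle values from audit records.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", "replace")

    # Values copied from the records above.
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters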
Jun 25 16:25:25.213000 audit[5659]: USER_START pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.213000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.220623 kernel: audit: type=1105 audit(1719332725.213:763): pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.220780 kernel: audit: type=1103 audit(1719332725.213:764): pid=5663 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.682774 sshd[5659]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:25.691549 kernel: audit: type=1106 audit(1719332725.683:765): pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.692202 kernel: audit: type=1104 audit(1719332725.683:766): pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.683000 audit[5659]: USER_END pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.683000 audit[5659]: CRED_DISP pid=5659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:25.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.32:22-139.178.89.65:36386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:25.687705 systemd[1]: sshd@22-172.31.29.32:22-139.178.89.65:36386.service: Deactivated successfully. Jun 25 16:25:25.689309 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:25:25.691503 systemd-logind[1784]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:25:25.693450 systemd-logind[1784]: Removed session 23. 
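The kernel echo lines ("kernel: audit: type=NNNN audit(<epoch>.<msec>:<serial>): ...") repeat each record with a numeric type (1105 is USER_START, 1106 is USER_END) and an epoch timestamp that converts straight back to the journal's wall-clock prefix. A quick check, with the header value copied from the session-23 records above:

    from datetime import datetime, timezone

    # Convert the audit(<epoch>.<msec>:<serial>) header back to wall-clock time.
    def audit_time(header: str) -> datetime:
        epoch = float(header.split("(")[1].split(":")[0])
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    print(audit_time("audit(1719332725.213:763)"))
    # -> 2024-06-25 16:25:25.213000+00:00, matching the journal prefix of the
    #    USER_START record for session 23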
Jun 25 16:25:26.481000 audit[5677]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:26.481000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd47b881b0 a2=0 a3=7ffd47b8819c items=0 ppid=3254 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:26.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:26.488000 audit[5677]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:26.488000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffd47b881b0 a2=0 a3=7ffd47b8819c items=0 ppid=3254 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:26.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:26.859104 containerd[1794]: time="2024-06-25T16:25:26.858974244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:26.860765 containerd[1794]: time="2024-06-25T16:25:26.860706190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:25:26.862641 containerd[1794]: time="2024-06-25T16:25:26.862600016Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:26.865064 containerd[1794]: time="2024-06-25T16:25:26.865027221Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:26.867543 containerd[1794]: time="2024-06-25T16:25:26.867489216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:26.868342 containerd[1794]: time="2024-06-25T16:25:26.868303204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.959702152s" Jun 25 16:25:26.868488 containerd[1794]: time="2024-06-25T16:25:26.868462704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:25:26.884780 containerd[1794]: time="2024-06-25T16:25:26.884734639Z" level=info msg="CreateContainer within sandbox \"093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 
16:25:26.912480 containerd[1794]: time="2024-06-25T16:25:26.912435917Z" level=info msg="CreateContainer within sandbox \"093636010cf29b3c0412868a4545efcb75bfa97417106a11de9f758a892d4d49\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ab35ca291439aa47ede9ebac475ca61c60f2db537c4b9cbb65e67c012f1832a\"" Jun 25 16:25:26.920639 containerd[1794]: time="2024-06-25T16:25:26.919269717Z" level=info msg="StartContainer for \"6ab35ca291439aa47ede9ebac475ca61c60f2db537c4b9cbb65e67c012f1832a\"" Jun 25 16:25:26.975119 systemd[1]: Started cri-containerd-6ab35ca291439aa47ede9ebac475ca61c60f2db537c4b9cbb65e67c012f1832a.scope - libcontainer container 6ab35ca291439aa47ede9ebac475ca61c60f2db537c4b9cbb65e67c012f1832a. Jun 25 16:25:26.989000 audit: BPF prog-id=183 op=LOAD Jun 25 16:25:26.990000 audit: BPF prog-id=184 op=LOAD Jun 25 16:25:26.990000 audit[5692]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5595 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:26.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623335636132393134333961613437656465396562616334373563 Jun 25 16:25:26.990000 audit: BPF prog-id=185 op=LOAD Jun 25 16:25:26.990000 audit[5692]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5595 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:26.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623335636132393134333961613437656465396562616334373563 Jun 25 16:25:26.990000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:25:26.990000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:25:26.990000 audit: BPF prog-id=186 op=LOAD Jun 25 16:25:26.990000 audit[5692]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5595 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:26.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623335636132393134333961613437656465396562616334373563 Jun 25 16:25:27.033680 containerd[1794]: time="2024-06-25T16:25:27.033605314Z" level=info msg="StartContainer for \"6ab35ca291439aa47ede9ebac475ca61c60f2db537c4b9cbb65e67c012f1832a\" returns successfully" Jun 25 16:25:27.729832 kubelet[2901]: I0625 16:25:27.729796 2901 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67f5b6b848-w46sj" podStartSLOduration=2.768914454 podCreationTimestamp="2024-06-25 16:25:20 +0000 UTC" firstStartedPulling="2024-06-25 16:25:21.907985612 +0000 UTC m=+102.439021931" lastFinishedPulling="2024-06-25 
16:25:26.86881237 +0000 UTC m=+107.399848690" observedRunningTime="2024-06-25 16:25:27.729230883 +0000 UTC m=+108.260267225" watchObservedRunningTime="2024-06-25 16:25:27.729741213 +0000 UTC m=+108.260777577" Jun 25 16:25:27.767000 audit[5728]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:27.767000 audit[5728]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff97c4ef10 a2=0 a3=7fff97c4eefc items=0 ppid=3254 pid=5728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:27.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:27.770000 audit[5728]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:27.770000 audit[5728]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7fff97c4ef10 a2=0 a3=7fff97c4eefc items=0 ppid=3254 pid=5728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:27.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:28.516000 audit[5730]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:28.516000 audit[5730]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdd4b52d80 a2=0 a3=7ffdd4b52d6c items=0 ppid=3254 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:28.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:28.519000 audit[5730]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:28.519000 audit[5730]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffdd4b52d80 a2=0 a3=7ffdd4b52d6c items=0 ppid=3254 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:28.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:30.734457 systemd[1]: Started sshd@23-172.31.29.32:22-139.178.89.65:41412.service - OpenSSH per-connection server daemon (139.178.89.65:41412). Jun 25 16:25:30.736205 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:25:30.736293 kernel: audit: type=1130 audit(1719332730.733:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.32:22-139.178.89.65:41412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:30.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.32:22-139.178.89.65:41412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:30.957000 audit[5732]: USER_ACCT pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.961544 sshd[5732]: Accepted publickey for core from 139.178.89.65 port 41412 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:30.962152 kernel: audit: type=1101 audit(1719332730.957:781): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.961000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.964653 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:30.966936 kernel: audit: type=1103 audit(1719332730.961:782): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.967073 kernel: audit: type=1006 audit(1719332730.961:783): pid=5732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:25:30.967116 kernel: audit: type=1300 audit(1719332730.961:783): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdf14c8b0 a2=3 a3=7f72001ce480 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.961000 audit[5732]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdf14c8b0 a2=3 a3=7f72001ce480 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.961000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:30.972807 kernel: audit: type=1327 audit(1719332730.961:783): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:30.973925 systemd-logind[1784]: New session 24 of user core. Jun 25 16:25:30.979087 systemd[1]: Started session-24.scope - Session 24 of User core. 
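The kubelet pod_startup_latency_tracker entry a few lines above reports podStartSLOduration=2.768914454 for calico-apiserver-67f5b6b848-w46sj, alongside the pull window (firstStartedPulling / lastFinishedPulling) and observedRunningTime, while containerd reported the apiserver image pull at 4.959702152s. A rough reconstruction with the standard library, assuming the tracker subtracts image-pull time from the creation-to-running interval (which is how the pod startup SLI is usually defined); the leftover sub-millisecond difference comes from where each component samples the pull window:

    from datetime import datetime

    # Timestamps copied from the kubelet/containerd entries above
    # (nanosecond fractions truncated to microseconds for strptime).
    def t(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    created  = t("2024-06-25 16:25:20.000000")   # podCreationTimestamp
    pull_beg = t("2024-06-25 16:25:21.907985")   # firstStartedPulling
    pull_end = t("2024-06-25 16:25:26.868812")   # lastFinishedPulling
    running  = t("2024-06-25 16:25:27.729230")   # observedRunningTime

    pull = (pull_end - pull_beg).total_seconds()      # ~4.961 s, vs containerd's 4.959702152s
    slo  = (running - created).total_seconds() - pull
    print(round(pull, 3), round(slo, 3))              # ~4.961  ~2.768  (reported: 2.768914454)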
Jun 25 16:25:30.985000 audit[5732]: USER_START pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.992729 kernel: audit: type=1105 audit(1719332730.985:784): pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.992854 kernel: audit: type=1103 audit(1719332730.991:785): pid=5734 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:30.991000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:31.570117 sshd[5732]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:31.575000 audit[5732]: USER_END pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:31.575000 audit[5732]: CRED_DISP pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:31.586427 kernel: audit: type=1106 audit(1719332731.575:786): pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:31.586658 kernel: audit: type=1104 audit(1719332731.575:787): pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:31.589146 systemd[1]: sshd@23-172.31.29.32:22-139.178.89.65:41412.service: Deactivated successfully. Jun 25 16:25:31.590224 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:25:31.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.32:22-139.178.89.65:41412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:31.591709 systemd-logind[1784]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:25:31.593082 systemd-logind[1784]: Removed session 24. 
Jun 25 16:25:33.180000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:33.180000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001fd05d0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:33.180000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:33.181000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:33.181000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002464f40 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:33.181000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:35.501000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6307 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.501000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00c644360 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.501000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:35.502000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.502000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c002b9e380 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.502000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:35.503000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6322 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.503000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00c6443c0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.503000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:35.508000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.508000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00c644690 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.508000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:35.525000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.525000 audit[2777]: AVC avc: denied { watch } for pid=2777 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c99,c713 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:35.525000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c00c664690 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.525000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:35.525000 audit[2777]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c002b9fac0 a2=fc6 a3=0 items=0 ppid=2626 pid=2777 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c99,c713 key=(null) Jun 25 16:25:35.525000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E32392E3332002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 16:25:36.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.32:22-139.178.89.65:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:36.610662 systemd[1]: Started sshd@24-172.31.29.32:22-139.178.89.65:55844.service - OpenSSH per-connection server daemon (139.178.89.65:55844). Jun 25 16:25:36.612358 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:25:36.612424 kernel: audit: type=1130 audit(1719332736.610:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.32:22-139.178.89.65:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:36.802350 sshd[5747]: Accepted publickey for core from 139.178.89.65 port 55844 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:36.801000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.803000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.805985 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:36.808232 kernel: audit: type=1101 audit(1719332736.801:798): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.808329 kernel: audit: type=1103 audit(1719332736.803:799): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.803000 audit[5747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe23ee5dc0 a2=3 a3=7fc535d11480 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.814506 kernel: audit: type=1006 audit(1719332736.803:800): pid=5747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:25:36.814615 kernel: audit: type=1300 audit(1719332736.803:800): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe23ee5dc0 a2=3 a3=7fc535d11480 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.815635 kernel: audit: type=1327 audit(1719332736.803:800): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:36.803000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:36.822499 systemd-logind[1784]: New session 25 of user core. Jun 25 16:25:36.828298 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 16:25:36.836000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.839000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.843755 kernel: audit: type=1105 audit(1719332736.836:801): pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:36.843860 kernel: audit: type=1103 audit(1719332736.839:802): pid=5749 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:37.221688 sshd[5747]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:37.222000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:37.226895 kernel: audit: type=1106 audit(1719332737.222:803): pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:37.227017 kernel: audit: type=1104 audit(1719332737.225:804): pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:37.225000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:37.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.32:22-139.178.89.65:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:37.232002 systemd-logind[1784]: Session 25 logged out. Waiting for processes to exit. 
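The SELinux AVC records a little earlier in this stretch show kube-controller-manager and kube-apiserver being denied the "watch" permission on certificates under /etc/kubernetes/pki (etc_t files seen through overlayfs), with the paired SYSCALL records reporting arch=c000003e, syscall=254, success=no, exit=-13. A small decode using the standard library; the syscall-number mapping is quoted from the x86_64 table, not derived from the log:

    import errno, os

    # syscall numbers appearing in the SYSCALL records of this section,
    # per the x86_64 table (arch=c000003e).
    X86_64_SYSCALLS = {1: "write", 46: "sendmsg", 254: "inotify_add_watch", 321: "bpf"}

    # The AVC-paired records report success=no exit=-13:
    print(X86_64_SYSCALLS[254], errno.errorcode[13], os.strerror(13))
    # -> inotify_add_watch EACCES Permission denied

In other words, the kube components are only being refused an inotify watch on the certificate files; the denials are logged with permissive=0 but do not stop the reads themselves.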
Jun 25 16:25:37.232235 systemd[1]: sshd@24-172.31.29.32:22-139.178.89.65:55844.service: Deactivated successfully. Jun 25 16:25:37.233323 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:25:37.236643 systemd-logind[1784]: Removed session 25. Jun 25 16:25:42.262067 systemd[1]: Started sshd@25-172.31.29.32:22-139.178.89.65:55846.service - OpenSSH per-connection server daemon (139.178.89.65:55846). Jun 25 16:25:42.272377 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:25:42.272546 kernel: audit: type=1130 audit(1719332742.263:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.32:22-139.178.89.65:55846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:42.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.32:22-139.178.89.65:55846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:42.436000 audit[5768]: USER_ACCT pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.441444 sshd[5768]: Accepted publickey for core from 139.178.89.65 port 55846 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:42.444949 kernel: audit: type=1101 audit(1719332742.436:807): pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.445064 kernel: audit: type=1103 audit(1719332742.441:808): pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.441000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.443332 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:42.448907 kernel: audit: type=1006 audit(1719332742.441:809): pid=5768 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:25:42.441000 audit[5768]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9a3fda60 a2=3 a3=7ff618c5d480 items=0 ppid=1 pid=5768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:42.453812 kernel: audit: type=1300 audit(1719332742.441:809): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9a3fda60 a2=3 a3=7ff618c5d480 items=0 ppid=1 pid=5768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:42.453964 kernel: audit: type=1327 audit(1719332742.441:809): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:42.441000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:42.454708 systemd-logind[1784]: New session 26 of user core. Jun 25 16:25:42.455086 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 16:25:42.463000 audit[5768]: USER_START pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.468025 kernel: audit: type=1105 audit(1719332742.463:810): pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.467000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.471094 kernel: audit: type=1103 audit(1719332742.467:811): pid=5770 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.733655 sshd[5768]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:42.742000 audit[5768]: USER_END pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.759589 kernel: audit: type=1106 audit(1719332742.742:812): pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.769097 kernel: audit: type=1104 audit(1719332742.742:813): pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.742000 audit[5768]: CRED_DISP pid=5768 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:42.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.32:22-139.178.89.65:55846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:42.746084 systemd[1]: sshd@25-172.31.29.32:22-139.178.89.65:55846.service: Deactivated successfully. Jun 25 16:25:42.748803 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:25:42.769147 systemd-logind[1784]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:25:42.773429 systemd-logind[1784]: Removed session 26. 
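Each SSH connection in this section leaves a matched USER_START/USER_END pair sharing the same ses value, so the audit headers alone give the session length. A sketch that pairs them by session id; the two sample lines are abridged copies of the session-26 records above:

    import re

    # Pair session open (type=1105, USER_START) and close (type=1106, USER_END)
    # kernel echo records on their ses= field and report the session length.
    rec = re.compile(r"type=(1105|1106) audit\((\d+\.\d+):\d+\):.*?ses=(\d+)")

    def durations(lines):
        opened = {}
        for line in lines:
            m = rec.search(line)
            if not m:
                continue
            typ, ts, ses = m.group(1), float(m.group(2)), m.group(3)
            if typ == "1105":
                opened[ses] = ts
            elif ses in opened:
                yield ses, round(ts - opened.pop(ses), 3)

    sample = [
        "audit: type=1105 audit(1719332742.463:810): pid=5768 uid=0 auid=500 ses=26",
        "audit: type=1106 audit(1719332742.742:812): pid=5768 uid=0 auid=500 ses=26",
    ]
    print(dict(durations(sample)))   # {'26': 0.279}  -- session 26 stayed open ~0.28 s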
Jun 25 16:25:43.973824 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.acStLD.mount: Deactivated successfully. Jun 25 16:25:47.770598 systemd[1]: Started sshd@26-172.31.29.32:22-139.178.89.65:57604.service - OpenSSH per-connection server daemon (139.178.89.65:57604). Jun 25 16:25:47.774378 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:25:47.774479 kernel: audit: type=1130 audit(1719332747.771:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.32:22-139.178.89.65:57604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:47.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.32:22-139.178.89.65:57604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:47.952000 audit[5803]: USER_ACCT pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:47.954236 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 57604 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:47.958041 kernel: audit: type=1101 audit(1719332747.952:816): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:47.957000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:47.958777 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:47.963378 kernel: audit: type=1103 audit(1719332747.957:817): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:47.963471 kernel: audit: type=1006 audit(1719332747.957:818): pid=5803 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:25:47.965182 kernel: audit: type=1300 audit(1719332747.957:818): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc09432da0 a2=3 a3=7fab753b2480 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:47.957000 audit[5803]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc09432da0 a2=3 a3=7fab753b2480 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:47.957000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:47.968657 kernel: audit: type=1327 audit(1719332747.957:818): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:47.971084 
systemd-logind[1784]: New session 27 of user core. Jun 25 16:25:47.977126 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:25:47.983000 audit[5803]: USER_START pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.000745 kernel: audit: type=1105 audit(1719332747.983:819): pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.001105 kernel: audit: type=1103 audit(1719332747.988:820): pid=5805 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:47.988000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.424946 sshd[5803]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:48.427000 audit[5803]: USER_END pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.432171 kernel: audit: type=1106 audit(1719332748.427:821): pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.432000 audit[5803]: CRED_DISP pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.440202 kernel: audit: type=1104 audit(1719332748.432:822): pid=5803 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:48.437617 systemd-logind[1784]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:25:48.440796 systemd[1]: sshd@26-172.31.29.32:22-139.178.89.65:57604.service: Deactivated successfully. Jun 25 16:25:48.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.32:22-139.178.89.65:57604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:48.442040 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:25:48.446009 systemd-logind[1784]: Removed session 27. 
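Sessions 26 and 27 above follow the same short-lived pattern: systemd starts a per-connection sshd@...service unit, PAM opens session-N.scope (USER_START, audit type 1105), and well under a second later the session closes again (USER_END, type 1106, followed by CRED_DISP and SERVICE_STOP). Below is a small sketch, purely over the records shown in this log, that pairs the kernel-echoed type=1105/1106 lines by audit session id and prints how long each session stayed open ("console.log" is a placeholder name for this journal dump):

    import re

    # Sketch: pair USER_START (type=1105) and USER_END (type=1106) audit
    # records by session id ("ses=") and print how long each SSH session
    # stayed open.
    REC = re.compile(r"type=(?P<type>1105|1106) audit\((?P<ts>\d+\.\d+):\d+\).*?ses=(?P<ses>\d+)")

    opened = {}
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            for m in REC.finditer(line):
                ts, ses = float(m["ts"]), m["ses"]
                if m["type"] == "1105":
                    opened[ses] = ts
                elif ses in opened:
                    print(f"session {ses}: open for {ts - opened.pop(ses):.1f}s")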
Jun 25 16:25:48.789000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:48.789000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0012ac1e0 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:48.789000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:48.796000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:48.796000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0011d9d80 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:48.796000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:48.799000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:48.799000 audit[2737]: AVC avc: denied { watch } for pid=2737 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:48.799000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0012ac380 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:48.799000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:48.799000 audit[2737]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0011d9f20 a2=fc6 a3=0 items=0 ppid=2605 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:25:48.799000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:53.457733 systemd[1]: Started sshd@27-172.31.29.32:22-139.178.89.65:57614.service - OpenSSH per-connection server daemon (139.178.89.65:57614). Jun 25 16:25:53.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.32:22-139.178.89.65:57614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:53.461784 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:25:53.461907 kernel: audit: type=1130 audit(1719332753.457:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.32:22-139.178.89.65:57614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:53.634000 audit[5820]: USER_ACCT pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.635756 sshd[5820]: Accepted publickey for core from 139.178.89.65 port 57614 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:25:53.637000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.640959 kernel: audit: type=1101 audit(1719332753.634:829): pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.641041 kernel: audit: type=1103 audit(1719332753.637:830): pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.641077 kernel: audit: type=1006 audit(1719332753.637:831): pid=5820 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:25:53.641661 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:53.637000 audit[5820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd281f53c0 a2=3 a3=7fc578f8e480 items=0 ppid=1 pid=5820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:53.645190 kernel: audit: type=1300 audit(1719332753.637:831): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd281f53c0 a2=3 a3=7fc578f8e480 items=0 ppid=1 pid=5820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:53.637000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:53.648901 kernel: audit: type=1327 
audit(1719332753.637:831): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:53.658076 systemd-logind[1784]: New session 28 of user core. Jun 25 16:25:53.662107 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 16:25:53.683543 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.MYXkfM.mount: Deactivated successfully. Jun 25 16:25:53.699172 kernel: audit: type=1105 audit(1719332753.691:832): pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.699312 kernel: audit: type=1103 audit(1719332753.694:833): pid=5837 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.691000 audit[5820]: USER_START pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.694000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.978214 sshd[5820]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:53.993000 audit[5820]: USER_END pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.998052 kernel: audit: type=1106 audit(1719332753.993:834): pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.997315 systemd-logind[1784]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:25:53.994000 audit[5820]: CRED_DISP pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:53.999660 systemd[1]: sshd@27-172.31.29.32:22-139.178.89.65:57614.service: Deactivated successfully. Jun 25 16:25:54.001922 kernel: audit: type=1104 audit(1719332753.994:835): pid=5820 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:54.001020 systemd[1]: session-28.scope: Deactivated successfully. 
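The AVC records at 16:25:48 show the confined kube-controller-manager process (scontext container_t) being denied the watch permission on /etc/kubernetes/pki/ca.crt, which is labeled etc_t; with permissive=0 the underlying inotify_add_watch call (syscall 254 on x86_64) fails with EACCES (exit=-13), and the hex proctitle decodes to the kube-controller-manager command line, truncated by the kernel. ausearch -m AVC -i or audit2allow would interpret these records directly; the sketch below just tallies them by process, permission and path ("console.log" is a placeholder name):

    import re
    from collections import Counter

    # Sketch: count the recurring SELinux "denied { watch }" AVC records in
    # this log by (comm, permission, path) to see which confined process
    # keeps hitting which file.
    AVC = re.compile(r'AVC avc:\s+denied\s+\{ (?P<perm>\w+) \}.*?comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)"')

    counts = Counter()
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            for m in AVC.finditer(line):
                counts[(m["comm"], m["perm"], m["path"])] += 1

    for (comm, perm, path), n in counts.most_common():
        print(f"{n:4d}  {comm}  {perm}  {path}")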
Jun 25 16:25:53.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.32:22-139.178.89.65:57614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:54.003044 systemd-logind[1784]: Removed session 28. Jun 25 16:25:55.153000 audit[5856]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:55.153000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffefe5cc330 a2=0 a3=7ffefe5cc31c items=0 ppid=3254 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:55.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:55.156000 audit[5856]: NETFILTER_CFG table=nat:131 family=2 entries=58 op=nft_register_chain pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:55.156000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffefe5cc330 a2=0 a3=7ffefe5cc31c items=0 ppid=3254 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:55.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.962065 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.JNsfRI.mount: Deactivated successfully. Jun 25 16:26:07.520173 systemd[1]: cri-containerd-eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e.scope: Deactivated successfully. Jun 25 16:26:07.520555 systemd[1]: cri-containerd-eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e.scope: Consumed 2.642s CPU time. Jun 25 16:26:07.527924 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:26:07.528074 kernel: audit: type=1334 audit(1719332767.523:839): prog-id=76 op=UNLOAD Jun 25 16:26:07.528116 kernel: audit: type=1334 audit(1719332767.523:840): prog-id=96 op=UNLOAD Jun 25 16:26:07.523000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:26:07.523000 audit: BPF prog-id=96 op=UNLOAD Jun 25 16:26:07.579606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e-rootfs.mount: Deactivated successfully. 
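The NETFILTER_CFG records above come from an iptables-restore run in nft mode (exe=/usr/sbin/xtables-nft-multi); its hex proctitle decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the two records register 8 rules into the filter table and 58 chains into the nat table for family=2 (AF_INET) via sendmsg (syscall 46). A sketch that summarizes such records across the whole log by table and operation ("console.log" is a placeholder name):

    import re
    from collections import Counter

    # Sketch: summarize NETFILTER_CFG audit records by table and operation
    # to see how often rules/chains are re-registered.
    NFT = re.compile(r"NETFILTER_CFG table=(?P<table>[\w-]+):\d+ family=(?P<family>\d+) entries=(?P<entries>\d+) op=(?P<op>\w+)")

    totals = Counter()
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            for m in NFT.finditer(line):
                totals[(m["table"], m["op"])] += int(m["entries"])

    for (table, op), entries in totals.most_common():
        print(f"{table:8s} {op:20s} {entries} entries")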
Jun 25 16:26:07.592127 containerd[1794]: time="2024-06-25T16:26:07.583186534Z" level=info msg="shim disconnected" id=eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e namespace=k8s.io Jun 25 16:26:07.592644 containerd[1794]: time="2024-06-25T16:26:07.592128036Z" level=warning msg="cleaning up after shim disconnected" id=eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e namespace=k8s.io Jun 25 16:26:07.592644 containerd[1794]: time="2024-06-25T16:26:07.592154085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:07.912363 kubelet[2901]: I0625 16:26:07.910713 2901 scope.go:117] "RemoveContainer" containerID="eeba96ea3f43cf60229880f73acbd1383da43fa78b8eed5540a9eb96c536293e" Jun 25 16:26:07.922770 containerd[1794]: time="2024-06-25T16:26:07.922707489Z" level=info msg="CreateContainer within sandbox \"0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 16:26:07.954461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414363900.mount: Deactivated successfully. Jun 25 16:26:08.006334 containerd[1794]: time="2024-06-25T16:26:08.006268899Z" level=info msg="CreateContainer within sandbox \"0679ec9881465b9cc8829a7e987653f85f2f93fd27d0bf6297830363e9a61144\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"58b4f65235ba17dfece30c2477a0a3ed9414c82e67afd45482c43fd15fe49dd0\"" Jun 25 16:26:08.007187 containerd[1794]: time="2024-06-25T16:26:08.006897429Z" level=info msg="StartContainer for \"58b4f65235ba17dfece30c2477a0a3ed9414c82e67afd45482c43fd15fe49dd0\"" Jun 25 16:26:08.044120 systemd[1]: Started cri-containerd-58b4f65235ba17dfece30c2477a0a3ed9414c82e67afd45482c43fd15fe49dd0.scope - libcontainer container 58b4f65235ba17dfece30c2477a0a3ed9414c82e67afd45482c43fd15fe49dd0. 
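The sequence above is a container restart inside an existing pod sandbox: the old kube-controller-manager container's scope is stopped, containerd reports the shim disconnected and cleans it up, and the kubelet removes the dead container record and issues CreateContainer/StartContainer in the same sandbox with Attempt:1, i.e. the first recreation of that container. The same pattern repeats below for tigera-operator and kube-scheduler. A sketch that extracts the Name/Attempt pairs from these CreateContainer messages ("console.log" is a placeholder name):

    import re
    from collections import defaultdict

    # Sketch: pull Name/Attempt pairs out of the CreateContainer messages in
    # this log to see which containers have been recreated and how often.
    META = re.compile(r"ContainerMetadata\{Name:(?P<name>[\w-]+),Attempt:(?P<attempt>\d+),\}")

    attempts = defaultdict(int)
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            for m in META.finditer(line):
                attempts[m["name"]] = max(attempts[m["name"]], int(m["attempt"]))

    for name, attempt in sorted(attempts.items()):
        print(f"{name}: attempt {attempt}")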
Jun 25 16:26:08.060000 audit: BPF prog-id=187 op=LOAD Jun 25 16:26:08.063917 kernel: audit: type=1334 audit(1719332768.060:841): prog-id=187 op=LOAD Jun 25 16:26:08.064145 kernel: audit: type=1334 audit(1719332768.061:842): prog-id=188 op=LOAD Jun 25 16:26:08.064190 kernel: audit: type=1300 audit(1719332768.061:842): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2605 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:08.061000 audit: BPF prog-id=188 op=LOAD Jun 25 16:26:08.061000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2605 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:08.066175 kernel: audit: type=1327 audit(1719332768.061:842): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538623466363532333562613137646665636533306332343737613061 Jun 25 16:26:08.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538623466363532333562613137646665636533306332343737613061 Jun 25 16:26:08.068799 kernel: audit: type=1334 audit(1719332768.061:843): prog-id=189 op=LOAD Jun 25 16:26:08.061000 audit: BPF prog-id=189 op=LOAD Jun 25 16:26:08.069545 kernel: audit: type=1300 audit(1719332768.061:843): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2605 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:08.061000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2605 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:08.072023 kernel: audit: type=1327 audit(1719332768.061:843): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538623466363532333562613137646665636533306332343737613061 Jun 25 16:26:08.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538623466363532333562613137646665636533306332343737613061 Jun 25 16:26:08.075144 kernel: audit: type=1334 audit(1719332768.061:844): prog-id=189 op=UNLOAD Jun 25 16:26:08.061000 audit: BPF prog-id=189 op=UNLOAD Jun 25 16:26:08.061000 audit: BPF prog-id=188 op=UNLOAD Jun 25 16:26:08.061000 audit: BPF prog-id=190 op=LOAD Jun 25 16:26:08.061000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2605 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:08.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538623466363532333562613137646665636533306332343737613061 Jun 25 16:26:08.114620 containerd[1794]: time="2024-06-25T16:26:08.114574114Z" level=info msg="StartContainer for \"58b4f65235ba17dfece30c2477a0a3ed9414c82e67afd45482c43fd15fe49dd0\" returns successfully" Jun 25 16:26:08.309388 systemd[1]: cri-containerd-d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916.scope: Deactivated successfully. Jun 25 16:26:08.309918 systemd[1]: cri-containerd-d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916.scope: Consumed 6.189s CPU time. Jun 25 16:26:08.308000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:26:08.313000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:26:08.344056 containerd[1794]: time="2024-06-25T16:26:08.343984716Z" level=info msg="shim disconnected" id=d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916 namespace=k8s.io Jun 25 16:26:08.344056 containerd[1794]: time="2024-06-25T16:26:08.344038086Z" level=warning msg="cleaning up after shim disconnected" id=d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916 namespace=k8s.io Jun 25 16:26:08.344056 containerd[1794]: time="2024-06-25T16:26:08.344051314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:08.580712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916-rootfs.mount: Deactivated successfully. Jun 25 16:26:08.920365 kubelet[2901]: I0625 16:26:08.920207 2901 scope.go:117] "RemoveContainer" containerID="d97d15464fb33cfec01507e7ae0e94fc33e2370ac6c7a23f96e21a2ab3b65916" Jun 25 16:26:08.933561 containerd[1794]: time="2024-06-25T16:26:08.933514926Z" level=info msg="CreateContainer within sandbox \"bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 16:26:09.002156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311337557.mount: Deactivated successfully. Jun 25 16:26:09.010753 containerd[1794]: time="2024-06-25T16:26:09.010697115Z" level=info msg="CreateContainer within sandbox \"bf7c30da5952497f683b2add2c378f9f8a8c4be6cbc77b3abce04a4b7222fae4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"99123004b4ff39b0842f57babe8e1b5b6871cf15086aea850c3039f148173865\"" Jun 25 16:26:09.011758 containerd[1794]: time="2024-06-25T16:26:09.011722849Z" level=info msg="StartContainer for \"99123004b4ff39b0842f57babe8e1b5b6871cf15086aea850c3039f148173865\"" Jun 25 16:26:09.077123 systemd[1]: Started cri-containerd-99123004b4ff39b0842f57babe8e1b5b6871cf15086aea850c3039f148173865.scope - libcontainer container 99123004b4ff39b0842f57babe8e1b5b6871cf15086aea850c3039f148173865. 
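Each of these container starts is bracketed by a burst of "audit: BPF prog-id=N op=LOAD" records from runc (the bpf(2) syscall is 321 on x86_64), most likely the cgroup filter programs attached for the new scope, and the matching UNLOADs appear when a scope is torn down. A sketch that tracks which program ids seen in this log have no matching UNLOAD yet ("console.log" is a placeholder name):

    import re

    # Sketch: follow "audit: BPF prog-id=N op=LOAD/UNLOAD" records and report
    # which program ids are still loaded at the end of the log.
    BPF = re.compile(r"BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)")

    live = set()
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            for m in BPF.finditer(line):
                if m["op"] == "LOAD":
                    live.add(int(m["id"]))
                else:
                    live.discard(int(m["id"]))

    print("still loaded:", sorted(live))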
Jun 25 16:26:09.125000 audit: BPF prog-id=191 op=LOAD Jun 25 16:26:09.125000 audit: BPF prog-id=192 op=LOAD Jun 25 16:26:09.125000 audit[5989]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3202 pid=5989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:09.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313233303034623466663339623038343266353762616265386531 Jun 25 16:26:09.126000 audit: BPF prog-id=193 op=LOAD Jun 25 16:26:09.126000 audit[5989]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3202 pid=5989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:09.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313233303034623466663339623038343266353762616265386531 Jun 25 16:26:09.126000 audit: BPF prog-id=193 op=UNLOAD Jun 25 16:26:09.126000 audit: BPF prog-id=192 op=UNLOAD Jun 25 16:26:09.127000 audit: BPF prog-id=194 op=LOAD Jun 25 16:26:09.127000 audit[5989]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3202 pid=5989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:09.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313233303034623466663339623038343266353762616265386531 Jun 25 16:26:09.154017 containerd[1794]: time="2024-06-25T16:26:09.153968684Z" level=info msg="StartContainer for \"99123004b4ff39b0842f57babe8e1b5b6871cf15086aea850c3039f148173865\" returns successfully" Jun 25 16:26:09.815000 audit[5938]: AVC avc: denied { watch } for pid=5938 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6320 scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.815000 audit[5938]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0004d3ad0 a2=fc6 a3=0 items=0 ppid=2605 pid=5938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:26:09.815000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:09.815000 audit[5938]: AVC avc: denied { watch } for pid=5938 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6305 
scontext=system_u:system_r:container_t:s0:c264,c282 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.815000 audit[5938]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0009760a0 a2=fc6 a3=0 items=0 ppid=2605 pid=5938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c264,c282 key=(null) Jun 25 16:26:09.815000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:12.066636 kubelet[2901]: E0625 16:26:12.066591 2901 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 16:26:12.855900 systemd[1]: cri-containerd-4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f.scope: Deactivated successfully. Jun 25 16:26:12.856383 systemd[1]: cri-containerd-4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f.scope: Consumed 1.405s CPU time. Jun 25 16:26:12.864999 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 16:26:12.865336 kernel: audit: type=1334 audit(1719332772.861:857): prog-id=84 op=UNLOAD Jun 25 16:26:12.865394 kernel: audit: type=1334 audit(1719332772.861:858): prog-id=95 op=UNLOAD Jun 25 16:26:12.861000 audit: BPF prog-id=84 op=UNLOAD Jun 25 16:26:12.861000 audit: BPF prog-id=95 op=UNLOAD Jun 25 16:26:12.890387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f-rootfs.mount: Deactivated successfully. Jun 25 16:26:12.892719 containerd[1794]: time="2024-06-25T16:26:12.892646877Z" level=info msg="shim disconnected" id=4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f namespace=k8s.io Jun 25 16:26:12.893148 containerd[1794]: time="2024-06-25T16:26:12.892720321Z" level=warning msg="cleaning up after shim disconnected" id=4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f namespace=k8s.io Jun 25 16:26:12.893148 containerd[1794]: time="2024-06-25T16:26:12.892733300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:13.882817 systemd[1]: run-containerd-runc-k8s.io-4384d4f6d9f40fdd415167585ce4b5668ea92b8e1b1b285e884e0344b5f2d479-runc.XSdHHL.mount: Deactivated successfully. 
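The kubelet error above means the node heartbeat Lease ip-172-31-29-32 in the kube-node-lease namespace could not be renewed within the 10s client timeout, which is consistent with the local API server responding slowly while the control-plane containers around it are being restarted; the log itself does not say why the request timed out. A sketch that checks how stale that Lease currently is, assuming the official kubernetes Python client package and a reachable kubeconfig (the lease name and namespace are taken from the error message):

    from datetime import datetime, timezone
    from kubernetes import client, config

    # Sketch (assumes the `kubernetes` client package and a working
    # kubeconfig): read the node Lease the kubelet failed to renew and
    # report how stale it is.
    config.load_kube_config()
    coord = client.CoordinationV1Api()
    lease = coord.read_namespaced_lease("ip-172-31-29-32", "kube-node-lease")

    age = datetime.now(timezone.utc) - lease.spec.renew_time
    print(f"lease last renewed {age.total_seconds():.0f}s ago; "
          f"lease duration is {lease.spec.lease_duration_seconds}s")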
Jun 25 16:26:13.954112 kubelet[2901]: I0625 16:26:13.954074 2901 scope.go:117] "RemoveContainer" containerID="4197b0c86323f45724b3fbc2ef7a4e85b097d7fa44034355c503deffc4a2fc5f" Jun 25 16:26:13.957457 containerd[1794]: time="2024-06-25T16:26:13.957414631Z" level=info msg="CreateContainer within sandbox \"559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 16:26:13.983507 containerd[1794]: time="2024-06-25T16:26:13.982265220Z" level=info msg="CreateContainer within sandbox \"559c1f6b6565479afe83e3b739abb332776be07834e3322e7afca6289ea0db0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25\"" Jun 25 16:26:13.984114 containerd[1794]: time="2024-06-25T16:26:13.984075275Z" level=info msg="StartContainer for \"180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25\"" Jun 25 16:26:14.047136 systemd[1]: run-containerd-runc-k8s.io-180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25-runc.gtIKQn.mount: Deactivated successfully. Jun 25 16:26:14.055127 systemd[1]: Started cri-containerd-180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25.scope - libcontainer container 180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25. Jun 25 16:26:14.068000 audit: BPF prog-id=195 op=LOAD Jun 25 16:26:14.070962 kernel: audit: type=1334 audit(1719332774.068:859): prog-id=195 op=LOAD Jun 25 16:26:14.075457 kernel: audit: type=1334 audit(1719332774.070:860): prog-id=196 op=LOAD Jun 25 16:26:14.075571 kernel: audit: type=1300 audit(1719332774.070:860): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2602 pid=6077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:14.070000 audit: BPF prog-id=196 op=LOAD Jun 25 16:26:14.070000 audit[6077]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2602 pid=6077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:14.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138306564306533393636366661343636653565346232646163363037 Jun 25 16:26:14.083993 kernel: audit: type=1327 audit(1719332774.070:860): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138306564306533393636366661343636653565346232646163363037 Jun 25 16:26:14.071000 audit: BPF prog-id=197 op=LOAD Jun 25 16:26:14.090569 kernel: audit: type=1334 audit(1719332774.071:861): prog-id=197 op=LOAD Jun 25 16:26:14.090669 kernel: audit: type=1300 audit(1719332774.071:861): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2602 pid=6077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:14.071000 audit[6077]: SYSCALL arch=c000003e syscall=321 
success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2602 pid=6077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:14.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138306564306533393636366661343636653565346232646163363037 Jun 25 16:26:14.095194 kernel: audit: type=1327 audit(1719332774.071:861): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138306564306533393636366661343636653565346232646163363037 Jun 25 16:26:14.095832 kernel: audit: type=1334 audit(1719332774.071:862): prog-id=197 op=UNLOAD Jun 25 16:26:14.071000 audit: BPF prog-id=197 op=UNLOAD Jun 25 16:26:14.071000 audit: BPF prog-id=196 op=UNLOAD Jun 25 16:26:14.071000 audit: BPF prog-id=198 op=LOAD Jun 25 16:26:14.071000 audit[6077]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2602 pid=6077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:14.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138306564306533393636366661343636653565346232646163363037 Jun 25 16:26:14.118460 containerd[1794]: time="2024-06-25T16:26:14.118412427Z" level=info msg="StartContainer for \"180ed0e39666fa466e5e4b2dac60762174a85c29d64ba60faf6211645fef8f25\" returns successfully" Jun 25 16:26:22.067502 kubelet[2901]: E0625 16:26:22.067462 2901 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-32?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 16:26:23.702387 systemd[1]: run-containerd-runc-k8s.io-fef8e38c4abf80c6bec2593267104cda22a39eeca67684c49b72fc5f2af8c615-runc.y7f5zT.mount: Deactivated successfully.
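The recurring run-containerd-runc-k8s.io-<id>-runc.<random>.mount units in this section (acStLD, MYXkfM, JNsfRI, XSdHHL, gtIKQn, y7f5zT) are transient mounts that systemd tracks around individual runc invocations; the repeated ones against the same container id are most plausibly exec-based probes, though the log does not say so explicitly. A sketch that counts them per container id ("console.log" is a placeholder name):

    import re
    from collections import Counter

    # Sketch: count transient runc mount-unit deactivations per container id.
    # Reading the repeated ones as per-exec (probe) mounts is an assumption,
    # not something the log states.
    MNT = re.compile(r"run-containerd-runc-k8s\.io-(?P<cid>[0-9a-f]+)-runc\.\w+\.mount: Deactivated")

    counts = Counter()
    with open("console.log") as fh:          # placeholder file name
        for line in fh:
            counts.update(m["cid"][:12] for m in MNT.finditer(line))

    for cid, n in counts.most_common():
        print(f"{cid}...  {n} transient runc mounts")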