Jul 2 06:54:12.896704 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 06:54:12.896739 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:54:12.896753 kernel: BIOS-provided physical RAM map: Jul 2 06:54:12.896764 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 06:54:12.896774 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 06:54:12.896784 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 06:54:12.896799 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jul 2 06:54:12.896811 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jul 2 06:54:12.896821 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jul 2 06:54:12.896832 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 06:54:12.896843 kernel: NX (Execute Disable) protection: active Jul 2 06:54:12.896854 kernel: SMBIOS 2.7 present. Jul 2 06:54:12.896864 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 2 06:54:12.896875 kernel: Hypervisor detected: KVM Jul 2 06:54:12.896892 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 06:54:12.896903 kernel: kvm-clock: using sched offset of 6997389430 cycles Jul 2 06:54:12.896916 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 06:54:12.896928 kernel: tsc: Detected 2499.998 MHz processor Jul 2 06:54:12.896940 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 06:54:12.896953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 06:54:12.896965 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jul 2 06:54:12.896979 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 06:54:12.896991 kernel: Using GB pages for direct mapping Jul 2 06:54:12.897003 kernel: ACPI: Early table checksum verification disabled Jul 2 06:54:12.897015 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jul 2 06:54:12.897027 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jul 2 06:54:12.897039 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 06:54:12.897052 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 2 06:54:12.897064 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jul 2 06:54:12.897078 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 06:54:12.897090 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 06:54:12.897102 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 2 06:54:12.897114 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 06:54:12.897126 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 2 
06:54:12.897139 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 2 06:54:12.897150 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 06:54:12.897162 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jul 2 06:54:12.897175 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jul 2 06:54:12.897189 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jul 2 06:54:12.897201 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jul 2 06:54:12.897219 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jul 2 06:54:12.897232 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jul 2 06:54:12.897244 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jul 2 06:54:12.897258 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jul 2 06:54:12.897274 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jul 2 06:54:12.897286 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jul 2 06:54:12.897300 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 06:54:12.897313 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 06:54:12.897326 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 2 06:54:12.897339 kernel: NUMA: Initialized distance table, cnt=1 Jul 2 06:54:12.897352 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jul 2 06:54:12.897364 kernel: Zone ranges: Jul 2 06:54:12.897377 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 06:54:12.897393 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jul 2 06:54:12.897406 kernel: Normal empty Jul 2 06:54:12.897419 kernel: Movable zone start for each node Jul 2 06:54:12.897432 kernel: Early memory node ranges Jul 2 06:54:12.897446 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 06:54:12.897458 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jul 2 06:54:12.897471 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jul 2 06:54:12.897494 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 06:54:12.897507 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 06:54:12.897524 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jul 2 06:54:12.897537 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 06:54:12.897552 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 06:54:12.897566 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 2 06:54:12.897592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 06:54:12.897606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 06:54:12.897621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 06:54:12.897635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 06:54:12.897649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 06:54:12.897667 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 06:54:12.897681 kernel: TSC deadline timer available Jul 2 06:54:12.897695 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 06:54:12.897709 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jul 2 06:54:12.897723 kernel: Booting paravirtualized kernel on KVM Jul 2 06:54:12.897737 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 
1910969940391419 ns Jul 2 06:54:12.897752 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 06:54:12.897766 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jul 2 06:54:12.897780 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jul 2 06:54:12.897797 kernel: pcpu-alloc: [0] 0 1 Jul 2 06:54:12.897811 kernel: kvm-guest: PV spinlocks enabled Jul 2 06:54:12.897825 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 06:54:12.897839 kernel: Fallback order for Node 0: 0 Jul 2 06:54:12.897853 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jul 2 06:54:12.897866 kernel: Policy zone: DMA32 Jul 2 06:54:12.897882 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:54:12.897897 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 06:54:12.897914 kernel: random: crng init done Jul 2 06:54:12.897927 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 06:54:12.897942 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 06:54:12.897956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 06:54:12.897971 kernel: Memory: 1928268K/2057760K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129232K reserved, 0K cma-reserved) Jul 2 06:54:12.897985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 06:54:12.897999 kernel: Kernel/User page tables isolation: enabled Jul 2 06:54:12.898013 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 06:54:12.898027 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 06:54:12.898044 kernel: Dynamic Preempt: voluntary Jul 2 06:54:12.898058 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 06:54:12.898073 kernel: rcu: RCU event tracing is enabled. Jul 2 06:54:12.898087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 06:54:12.898102 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 06:54:12.898116 kernel: Rude variant of Tasks RCU enabled. Jul 2 06:54:12.898130 kernel: Tracing variant of Tasks RCU enabled. Jul 2 06:54:12.898144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 06:54:12.898159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 06:54:12.898176 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 06:54:12.898190 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 06:54:12.898204 kernel: Console: colour VGA+ 80x25 Jul 2 06:54:12.898218 kernel: printk: console [ttyS0] enabled Jul 2 06:54:12.898232 kernel: ACPI: Core revision 20220331 Jul 2 06:54:12.898246 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 2 06:54:12.898260 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 06:54:12.898274 kernel: x2apic enabled Jul 2 06:54:12.898288 kernel: Switched APIC routing to physical x2apic. 
Jul 2 06:54:12.898303 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 2 06:54:12.898320 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jul 2 06:54:12.898334 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 06:54:12.898359 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 06:54:12.898376 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 06:54:12.898391 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 06:54:12.898406 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 06:54:12.898420 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 06:54:12.898435 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 2 06:54:12.898450 kernel: RETBleed: Vulnerable Jul 2 06:54:12.898618 kernel: Speculative Store Bypass: Vulnerable Jul 2 06:54:12.898634 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 06:54:12.898648 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 06:54:12.898663 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 06:54:12.898682 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 06:54:12.898697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 06:54:12.898712 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 06:54:12.898727 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 2 06:54:12.898742 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 2 06:54:12.898759 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 06:54:12.898774 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 06:54:12.898789 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 06:54:12.898803 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 2 06:54:12.898818 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 06:54:12.898833 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 2 06:54:12.898848 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 2 06:54:12.898862 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 2 06:54:12.898877 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 2 06:54:12.898892 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 2 06:54:12.898906 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 2 06:54:12.898921 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jul 2 06:54:12.898939 kernel: Freeing SMP alternatives memory: 32K Jul 2 06:54:12.898954 kernel: pid_max: default: 32768 minimum: 301 Jul 2 06:54:12.898968 kernel: LSM: Security Framework initializing Jul 2 06:54:12.898983 kernel: SELinux: Initializing. Jul 2 06:54:12.898998 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 06:54:12.899013 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 06:54:12.899028 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 06:54:12.899043 kernel: cblist_init_generic: Setting adjustable number of callback queues. 
Jul 2 06:54:12.899058 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:54:12.899074 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:54:12.899089 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:54:12.899106 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:54:12.899121 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:54:12.899136 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 06:54:12.899151 kernel: signal: max sigframe size: 3632 Jul 2 06:54:12.899166 kernel: rcu: Hierarchical SRCU implementation. Jul 2 06:54:12.899181 kernel: rcu: Max phase no-delay instances is 400. Jul 2 06:54:12.899196 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 06:54:12.899211 kernel: smp: Bringing up secondary CPUs ... Jul 2 06:54:12.899226 kernel: x86: Booting SMP configuration: Jul 2 06:54:12.899292 kernel: .... node #0, CPUs: #1 Jul 2 06:54:12.899310 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 06:54:12.899457 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 06:54:12.899475 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 06:54:12.899490 kernel: smpboot: Max logical packages: 1 Jul 2 06:54:12.899537 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jul 2 06:54:12.899556 kernel: devtmpfs: initialized Jul 2 06:54:12.899569 kernel: x86/mm: Memory block size: 128MB Jul 2 06:54:12.907680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 06:54:12.907741 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 06:54:12.907757 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 06:54:12.907901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 06:54:12.908029 kernel: audit: initializing netlink subsys (disabled) Jul 2 06:54:12.908047 kernel: audit: type=2000 audit(1719903252.284:1): state=initialized audit_enabled=0 res=1 Jul 2 06:54:12.908067 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 06:54:12.908083 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 06:54:12.908098 kernel: cpuidle: using governor menu Jul 2 06:54:12.908113 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 06:54:12.908133 kernel: dca service started, version 1.12.1 Jul 2 06:54:12.908148 kernel: PCI: Using configuration type 1 for base access Jul 2 06:54:12.908163 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 06:54:12.908179 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 06:54:12.908194 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 06:54:12.908209 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 06:54:12.908224 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 06:54:12.908239 kernel: ACPI: Added _OSI(Module Device) Jul 2 06:54:12.908255 kernel: ACPI: Added _OSI(Processor Device) Jul 2 06:54:12.908272 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 06:54:12.908287 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 06:54:12.908303 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 06:54:12.908318 kernel: ACPI: Interpreter enabled Jul 2 06:54:12.908333 kernel: ACPI: PM: (supports S0 S5) Jul 2 06:54:12.908348 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 06:54:12.908363 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 06:54:12.908378 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 06:54:12.908393 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 06:54:12.908412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 06:54:12.908669 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 06:54:12.908801 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 06:54:12.908926 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Jul 2 06:54:12.908945 kernel: acpiphp: Slot [3] registered Jul 2 06:54:12.908961 kernel: acpiphp: Slot [4] registered Jul 2 06:54:12.908977 kernel: acpiphp: Slot [5] registered Jul 2 06:54:12.908996 kernel: acpiphp: Slot [6] registered Jul 2 06:54:12.909011 kernel: acpiphp: Slot [7] registered Jul 2 06:54:12.909026 kernel: acpiphp: Slot [8] registered Jul 2 06:54:12.909041 kernel: acpiphp: Slot [9] registered Jul 2 06:54:12.909056 kernel: acpiphp: Slot [10] registered Jul 2 06:54:12.909071 kernel: acpiphp: Slot [11] registered Jul 2 06:54:12.909086 kernel: acpiphp: Slot [12] registered Jul 2 06:54:12.909100 kernel: acpiphp: Slot [13] registered Jul 2 06:54:12.909115 kernel: acpiphp: Slot [14] registered Jul 2 06:54:12.909130 kernel: acpiphp: Slot [15] registered Jul 2 06:54:12.909148 kernel: acpiphp: Slot [16] registered Jul 2 06:54:12.909163 kernel: acpiphp: Slot [17] registered Jul 2 06:54:12.909178 kernel: acpiphp: Slot [18] registered Jul 2 06:54:12.909193 kernel: acpiphp: Slot [19] registered Jul 2 06:54:12.909208 kernel: acpiphp: Slot [20] registered Jul 2 06:54:12.909223 kernel: acpiphp: Slot [21] registered Jul 2 06:54:12.909238 kernel: acpiphp: Slot [22] registered Jul 2 06:54:12.909253 kernel: acpiphp: Slot [23] registered Jul 2 06:54:12.909267 kernel: acpiphp: Slot [24] registered Jul 2 06:54:12.909285 kernel: acpiphp: Slot [25] registered Jul 2 06:54:12.909300 kernel: acpiphp: Slot [26] registered Jul 2 06:54:12.909314 kernel: acpiphp: Slot [27] registered Jul 2 06:54:12.909329 kernel: acpiphp: Slot [28] registered Jul 2 06:54:12.909340 kernel: acpiphp: Slot [29] registered Jul 2 06:54:12.909355 kernel: acpiphp: Slot [30] registered Jul 2 06:54:12.909370 kernel: acpiphp: Slot [31] registered Jul 2 06:54:12.909385 kernel: PCI host bridge to bus 0000:00 Jul 2 06:54:12.909513 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 06:54:12.909655 kernel: 
pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 06:54:12.909770 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 06:54:12.909881 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 06:54:12.909993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 06:54:12.910135 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 06:54:12.910271 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 06:54:12.910405 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jul 2 06:54:12.910537 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 06:54:12.910680 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 06:54:12.910806 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 2 06:54:12.910932 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 2 06:54:12.911057 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 2 06:54:12.911181 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 2 06:54:12.911310 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 2 06:54:12.911489 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 2 06:54:12.915785 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 28320 usecs Jul 2 06:54:12.915985 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jul 2 06:54:12.916139 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jul 2 06:54:12.916263 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 06:54:12.916384 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 06:54:12.916518 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 06:54:12.916657 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jul 2 06:54:12.916785 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 06:54:12.917033 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jul 2 06:54:12.917054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 06:54:12.917070 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 06:54:12.917084 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 06:54:12.917099 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 06:54:12.917119 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 06:54:12.917133 kernel: iommu: Default domain type: Translated Jul 2 06:54:12.917148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 06:54:12.917382 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 06:54:12.917405 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 06:54:12.917420 kernel: PTP clock support registered Jul 2 06:54:12.917435 kernel: PCI: Using ACPI for IRQ routing Jul 2 06:54:12.917450 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 06:54:12.917465 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 06:54:12.917484 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jul 2 06:54:12.917650 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 2 06:54:12.917776 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 2 06:54:12.917898 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 06:54:12.917916 kernel: vgaarb: loaded Jul 2 06:54:12.917932 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 2 06:54:12.917947 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 2 06:54:12.917961 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 06:54:12.917980 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 06:54:12.917995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 06:54:12.918010 kernel: pnp: PnP ACPI init Jul 2 06:54:12.918024 kernel: pnp: PnP ACPI: found 5 devices Jul 2 06:54:12.918039 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 06:54:12.918054 kernel: NET: Registered PF_INET protocol family Jul 2 06:54:12.918069 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 06:54:12.918083 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 06:54:12.918098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 06:54:12.918116 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 06:54:12.918130 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 06:54:12.918300 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 06:54:12.918316 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 06:54:12.918331 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 06:54:12.918345 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 06:54:12.918360 kernel: NET: Registered PF_XDP protocol family Jul 2 06:54:12.918491 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 06:54:12.918623 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 06:54:12.918733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 06:54:12.918843 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 06:54:12.918971 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 06:54:12.918990 kernel: PCI: CLS 0 bytes, default 64 Jul 2 06:54:12.919005 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 06:54:12.919020 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 2 06:54:12.919035 kernel: clocksource: Switched to clocksource tsc Jul 2 06:54:12.919053 kernel: Initialise system trusted keyrings Jul 2 06:54:12.919068 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 06:54:12.919083 kernel: Key type asymmetric registered Jul 2 06:54:12.919097 kernel: Asymmetric key parser 'x509' registered Jul 2 06:54:12.919112 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 06:54:12.919127 kernel: 
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 06:54:12.919142 kernel: io scheduler mq-deadline registered Jul 2 06:54:12.919157 kernel: io scheduler kyber registered Jul 2 06:54:12.919171 kernel: io scheduler bfq registered Jul 2 06:54:12.919189 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 06:54:12.919204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 06:54:12.919219 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 06:54:12.919234 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 06:54:12.919248 kernel: i8042: Warning: Keylock active Jul 2 06:54:12.919263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 06:54:12.919278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 06:54:12.919405 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 06:54:12.919518 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 06:54:12.923918 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T06:54:12 UTC (1719903252) Jul 2 06:54:12.924290 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 2 06:54:12.924371 kernel: intel_pstate: CPU model not supported Jul 2 06:54:12.924389 kernel: NET: Registered PF_INET6 protocol family Jul 2 06:54:12.924405 kernel: Segment Routing with IPv6 Jul 2 06:54:12.924420 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 06:54:12.924435 kernel: NET: Registered PF_PACKET protocol family Jul 2 06:54:12.924450 kernel: Key type dns_resolver registered Jul 2 06:54:12.924472 kernel: IPI shorthand broadcast: enabled Jul 2 06:54:12.924487 kernel: sched_clock: Marking stable (677013559, 352308147)->(1145411513, -116089807) Jul 2 06:54:12.924502 kernel: registered taskstats version 1 Jul 2 06:54:12.924517 kernel: Loading compiled-in X.509 certificates Jul 2 06:54:12.924532 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 06:54:12.924547 kernel: Key type .fscrypt registered Jul 2 06:54:12.924561 kernel: Key type fscrypt-provisioning registered Jul 2 06:54:12.924588 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 06:54:12.924604 kernel: ima: Allocated hash algorithm: sha1 Jul 2 06:54:12.924622 kernel: ima: No architecture policies found Jul 2 06:54:12.924637 kernel: clk: Disabling unused clocks Jul 2 06:54:12.924652 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 06:54:12.924666 kernel: Write protecting the kernel read-only data: 34816k Jul 2 06:54:12.924681 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 06:54:12.924748 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 06:54:12.924764 kernel: Run /init as init process Jul 2 06:54:12.924779 kernel: with arguments: Jul 2 06:54:12.924843 kernel: /init Jul 2 06:54:12.924862 kernel: with environment: Jul 2 06:54:12.924923 kernel: HOME=/ Jul 2 06:54:12.924942 kernel: TERM=linux Jul 2 06:54:12.924957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 06:54:12.924977 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:54:12.925021 systemd[1]: Detected virtualization amazon. Jul 2 06:54:12.925038 systemd[1]: Detected architecture x86-64. 
Jul 2 06:54:12.925056 systemd[1]: Running in initrd. Jul 2 06:54:12.925096 systemd[1]: No hostname configured, using default hostname. Jul 2 06:54:12.925112 systemd[1]: Hostname set to . Jul 2 06:54:12.925129 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:54:12.925145 systemd[1]: Queued start job for default target initrd.target. Jul 2 06:54:12.925186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:54:12.925203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:54:12.925219 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:54:12.925440 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:54:12.925458 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:54:12.925475 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:54:12.925492 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:54:12.925509 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:54:12.925526 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 06:54:12.925542 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 06:54:12.925561 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 06:54:12.925589 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:54:12.925606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:54:12.925622 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:54:12.925638 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:54:12.925655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:54:12.925671 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 06:54:12.925687 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 06:54:12.925703 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:54:12.925723 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:54:12.925739 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 06:54:12.925756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:54:12.925772 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 06:54:12.925791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:54:12.925816 systemd-journald[180]: Journal started Jul 2 06:54:12.925891 systemd-journald[180]: Runtime Journal (/run/log/journal/ec267de58ff84e11569dd7e873a5a5f6) is 4.8M, max 38.6M, 33.8M free. Jul 2 06:54:12.894614 systemd-modules-load[181]: Inserted module 'overlay' Jul 2 06:54:13.045696 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 06:54:13.045731 kernel: Bridge firewalling registered Jul 2 06:54:13.045748 kernel: SCSI subsystem initialized Jul 2 06:54:13.045765 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 06:54:13.045782 kernel: device-mapper: uevent: version 1.0.3 Jul 2 06:54:13.045798 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 06:54:13.045814 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 06:54:12.939225 systemd-modules-load[181]: Inserted module 'br_netfilter' Jul 2 06:54:13.048809 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:54:12.992569 systemd-modules-load[181]: Inserted module 'dm_multipath' Jul 2 06:54:13.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.049159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:54:13.056508 kernel: audit: type=1130 audit(1719903253.048:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.056542 kernel: audit: type=1130 audit(1719903253.052:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.056844 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:54:13.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.059525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:54:13.064197 kernel: audit: type=1130 audit(1719903253.058:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.066651 kernel: audit: type=1130 audit(1719903253.061:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.066823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:54:13.069961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:54:13.072843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:54:13.091386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:54:13.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:13.093595 kernel: audit: type=1130 audit(1719903253.090:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.096000 audit: BPF prog-id=6 op=LOAD Jul 2 06:54:13.098604 kernel: audit: type=1334 audit(1719903253.096:7): prog-id=6 op=LOAD Jul 2 06:54:13.100269 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:54:13.103188 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:54:13.107344 kernel: audit: type=1130 audit(1719903253.102:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.109545 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:54:13.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.115612 kernel: audit: type=1130 audit(1719903253.111:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.115881 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 06:54:13.138012 dracut-cmdline[206]: dracut-dracut-053 Jul 2 06:54:13.148686 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:54:13.203087 systemd-resolved[201]: Positive Trust Anchors: Jul 2 06:54:13.203110 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:54:13.203156 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:54:13.215908 systemd-resolved[201]: Defaulting to hostname 'linux'. Jul 2 06:54:13.218294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:54:13.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.220645 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 2 06:54:13.229181 kernel: audit: type=1130 audit(1719903253.219:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.261611 kernel: Loading iSCSI transport class v2.0-870. Jul 2 06:54:13.274614 kernel: iscsi: registered transport (tcp) Jul 2 06:54:13.298614 kernel: iscsi: registered transport (qla4xxx) Jul 2 06:54:13.298686 kernel: QLogic iSCSI HBA Driver Jul 2 06:54:13.337001 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 06:54:13.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.341849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 06:54:13.414636 kernel: raid6: avx512x4 gen() 17269 MB/s Jul 2 06:54:13.431739 kernel: raid6: avx512x2 gen() 16379 MB/s Jul 2 06:54:13.448628 kernel: raid6: avx512x1 gen() 14666 MB/s Jul 2 06:54:13.465631 kernel: raid6: avx2x4 gen() 16376 MB/s Jul 2 06:54:13.482632 kernel: raid6: avx2x2 gen() 14950 MB/s Jul 2 06:54:13.499612 kernel: raid6: avx2x1 gen() 12908 MB/s Jul 2 06:54:13.499689 kernel: raid6: using algorithm avx512x4 gen() 17269 MB/s Jul 2 06:54:13.516616 kernel: raid6: .... xor() 7092 MB/s, rmw enabled Jul 2 06:54:13.516684 kernel: raid6: using avx512x2 recovery algorithm Jul 2 06:54:13.519614 kernel: xor: automatically using best checksumming function avx Jul 2 06:54:13.675605 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 06:54:13.686066 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:54:13.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.686000 audit: BPF prog-id=7 op=LOAD Jul 2 06:54:13.686000 audit: BPF prog-id=8 op=LOAD Jul 2 06:54:13.693843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:54:13.715649 systemd-udevd[382]: Using default interface naming scheme 'v252'. Jul 2 06:54:13.721280 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:54:13.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.727778 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 06:54:13.745961 dracut-pre-trigger[386]: rd.md=0: removing MD RAID activation Jul 2 06:54:13.778103 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:54:13.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:13.791851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:54:13.873109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:54:13.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:13.945607 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 06:54:13.951909 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 06:54:13.967257 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 06:54:13.967428 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 2 06:54:13.967561 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:e4:a5:09:be:6f Jul 2 06:54:13.969152 (udev-worker)[431]: Network interface NamePolicy= disabled on kernel command line. Jul 2 06:54:13.981612 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 06:54:13.981687 kernel: AES CTR mode by8 optimization enabled Jul 2 06:54:14.027609 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 06:54:14.027851 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 06:54:14.037601 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 06:54:14.040617 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 06:54:14.040670 kernel: GPT:9289727 != 16777215 Jul 2 06:54:14.040687 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 06:54:14.040704 kernel: GPT:9289727 != 16777215 Jul 2 06:54:14.040719 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 06:54:14.040743 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 06:54:14.124664 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (427) Jul 2 06:54:14.136745 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 2 06:54:14.167612 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (421) Jul 2 06:54:14.169186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 06:54:14.245259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 06:54:14.257437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 06:54:14.257615 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 06:54:14.273029 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 06:54:14.281762 disk-uuid[589]: Primary Header is updated. Jul 2 06:54:14.281762 disk-uuid[589]: Secondary Entries is updated. Jul 2 06:54:14.281762 disk-uuid[589]: Secondary Header is updated. Jul 2 06:54:14.287607 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 06:54:14.295607 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 06:54:14.302607 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 06:54:15.299372 disk-uuid[590]: The operation has completed successfully. Jul 2 06:54:15.301157 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 06:54:15.497235 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 06:54:15.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.497369 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
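The GPT warnings above ("GPT:9289727 != 16777215") mean the primary GPT header claims its backup copy sits at LBA 9289727 while the EBS volume actually ends at LBA 16777215, which is expected when a smaller disk image is written to a larger volume; the kernel's own hint is to repair it with GNU Parted. A minimal read-only sketch of the same check in Python (the device path is the one reported in the log; running it requires root and nothing is modified):

# Read-only sketch: compare where the primary GPT header says the backup
# header lives with where the disk actually ends, mirroring the kernel's
# "GPT:9289727 != 16777215" complaint above. Requires root; modifies nothing.
import os

DEV = "/dev/nvme0n1"   # device reported in the log above
SECTOR = 512

fd = os.open(DEV, os.O_RDONLY)
try:
    size_bytes = os.lseek(fd, 0, os.SEEK_END)   # block device size in bytes
    os.lseek(fd, SECTOR, os.SEEK_SET)           # LBA 1 holds the primary GPT header
    hdr = os.read(fd, 92)
finally:
    os.close(fd)

assert hdr[:8] == b"EFI PART", "no GPT signature at LBA 1"
alt_lba = int.from_bytes(hdr[32:40], "little")  # "alternate LBA" header field
last_lba = size_bytes // SECTOR - 1
print(f"backup GPT header expected at LBA {alt_lba}, disk ends at LBA {last_lba}")
# When the two differ, as in this boot, tools such as parted or sgdisk -e
# relocate the backup header to the end of the disk.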
Jul 2 06:54:15.519017 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 06:54:15.524215 sh[930]: Success Jul 2 06:54:15.547600 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 06:54:15.638850 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 06:54:15.652350 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 06:54:15.657246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 06:54:15.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.673601 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 06:54:15.673663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:15.673681 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 06:54:15.674730 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 06:54:15.674752 kernel: BTRFS info (device dm-0): using free space tree Jul 2 06:54:15.774610 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 06:54:15.797191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 06:54:15.801226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 06:54:15.814883 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 06:54:15.824913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 06:54:15.839523 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:54:15.839671 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:15.839696 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 06:54:15.850609 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 06:54:15.861348 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 06:54:15.864075 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:54:15.870600 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 06:54:15.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.875760 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 06:54:15.937028 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:54:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.937000 audit: BPF prog-id=9 op=LOAD Jul 2 06:54:15.944879 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:54:15.970391 systemd-networkd[1121]: lo: Link UP Jul 2 06:54:15.970404 systemd-networkd[1121]: lo: Gained carrier Jul 2 06:54:15.970940 systemd-networkd[1121]: Enumeration completed Jul 2 06:54:15.971045 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 2 06:54:15.971379 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:54:15.971382 systemd-networkd[1121]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:54:15.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.980445 systemd[1]: Reached target network.target - Network. Jul 2 06:54:15.990155 systemd-networkd[1121]: eth0: Link UP Jul 2 06:54:15.990165 systemd-networkd[1121]: eth0: Gained carrier Jul 2 06:54:15.990176 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:54:15.992231 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:54:16.003738 systemd-networkd[1121]: eth0: DHCPv4 address 172.31.18.4/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 06:54:16.009245 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:54:16.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.023191 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 06:54:16.028352 iscsid[1127]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:54:16.030369 iscsid[1127]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 06:54:16.030369 iscsid[1127]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 06:54:16.030369 iscsid[1127]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 06:54:16.030369 iscsid[1127]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:54:16.030369 iscsid[1127]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 06:54:16.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.030407 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 06:54:16.057219 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 06:54:16.076621 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 06:54:16.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.076880 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:54:16.080145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:54:16.081344 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:54:16.085932 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
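The iscsid messages above only complain about a missing /etc/iscsi/initiatorname.iscsi and are harmless here, since no iSCSI targets are configured. If software iSCSI were actually wanted, the daemon spells out the required format itself: a single InitiatorName= line holding an IQN of the form iqn.yyyy-mm.<reversed domain>[:identifier]. A minimal sketch, with a purely illustrative IQN (not a value from this system):

# Sketch: create the initiator-name file iscsid asks for above.
# The IQN is an illustrative placeholder following the
# iqn.yyyy-mm.<reversed domain>[:identifier] pattern, not a real one.
from pathlib import Path

initiator = "iqn.2024-07.com.example.host:initiator01"
target = Path("/etc/iscsi/initiatorname.iscsi")
target.parent.mkdir(parents=True, exist_ok=True)   # requires root
target.write_text(f"InitiatorName={initiator}\n")
print(f"wrote {target}")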
Jul 2 06:54:16.100411 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:54:16.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.183320 ignition[1052]: Ignition 2.15.0 Jul 2 06:54:16.183334 ignition[1052]: Stage: fetch-offline Jul 2 06:54:16.183574 ignition[1052]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:16.183617 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:16.184603 ignition[1052]: Ignition finished successfully Jul 2 06:54:16.188750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:54:16.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.196866 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 06:54:16.210877 ignition[1146]: Ignition 2.15.0 Jul 2 06:54:16.210888 ignition[1146]: Stage: fetch Jul 2 06:54:16.211138 ignition[1146]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:16.211147 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:16.211228 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:16.249857 ignition[1146]: PUT result: OK Jul 2 06:54:16.252882 ignition[1146]: parsed url from cmdline: "" Jul 2 06:54:16.252890 ignition[1146]: no config URL provided Jul 2 06:54:16.252897 ignition[1146]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:54:16.252909 ignition[1146]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:54:16.252935 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:16.255906 ignition[1146]: PUT result: OK Jul 2 06:54:16.255957 ignition[1146]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 06:54:16.258845 ignition[1146]: GET result: OK Jul 2 06:54:16.258956 ignition[1146]: parsing config with SHA512: 4b61a3056c551cfe7668d2f108c4c7dac40828903d09c81d4630d65e1d39d24cd3143ade1c6702bd6722c071ad820b1707937d2310fbaf444558abb403342b5a Jul 2 06:54:16.263397 unknown[1146]: fetched base config from "system" Jul 2 06:54:16.263410 unknown[1146]: fetched base config from "system" Jul 2 06:54:16.264006 ignition[1146]: fetch: fetch complete Jul 2 06:54:16.263416 unknown[1146]: fetched user config from "aws" Jul 2 06:54:16.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.264012 ignition[1146]: fetch: fetch passed Jul 2 06:54:16.265873 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 06:54:16.264277 ignition[1146]: Ignition finished successfully Jul 2 06:54:16.272762 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
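The fetch stage above shows the IMDSv2 pattern Ignition uses on EC2: a PUT to /latest/api/token to obtain a session token, then a GET for the user-data with that token attached. A minimal standalone sketch of the same two requests using Python's standard library (the token TTL is an arbitrary choice, the 2019-10-01 user-data path is the one visible in the log, and this only works from inside an instance):

# Sketch of the two-step IMDSv2 flow seen in the Ignition "fetch" stage above:
# PUT /latest/api/token, then GET user-data with the token header.
# 169.254.169.254 is link-local, so this is only reachable from inside EC2.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 300) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def user_data(token: str) -> bytes:
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",    # same path Ignition requests above
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    print(user_data(imds_token())[:200])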
Jul 2 06:54:16.292183 ignition[1152]: Ignition 2.15.0 Jul 2 06:54:16.292210 ignition[1152]: Stage: kargs Jul 2 06:54:16.292536 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:16.292546 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:16.292667 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:16.296009 ignition[1152]: PUT result: OK Jul 2 06:54:16.300066 ignition[1152]: kargs: kargs passed Jul 2 06:54:16.300141 ignition[1152]: Ignition finished successfully Jul 2 06:54:16.302435 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 06:54:16.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.316114 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 06:54:16.331932 ignition[1158]: Ignition 2.15.0 Jul 2 06:54:16.331946 ignition[1158]: Stage: disks Jul 2 06:54:16.332306 ignition[1158]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:16.332321 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:16.332530 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:16.334008 ignition[1158]: PUT result: OK Jul 2 06:54:16.340442 ignition[1158]: disks: disks passed Jul 2 06:54:16.340615 ignition[1158]: Ignition finished successfully Jul 2 06:54:16.343009 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 06:54:16.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.344291 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 06:54:16.347601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:54:16.348944 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:54:16.352245 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:54:16.356561 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:54:16.369065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 06:54:16.406428 systemd-fsck[1166]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 06:54:16.413241 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 06:54:16.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.419800 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 06:54:16.623709 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 06:54:16.624392 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 06:54:16.625402 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 06:54:16.644779 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:54:16.673761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 2 06:54:16.688620 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1183) Jul 2 06:54:16.688663 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:54:16.688681 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:16.688699 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 06:54:16.676397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 06:54:16.676477 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 06:54:16.697699 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 06:54:16.676514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:54:16.700409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 06:54:16.701498 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 06:54:16.714419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 06:54:17.069944 initrd-setup-root[1207]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 06:54:17.096210 initrd-setup-root[1214]: cut: /sysroot/etc/group: No such file or directory Jul 2 06:54:17.113264 initrd-setup-root[1221]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 06:54:17.127407 initrd-setup-root[1228]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 06:54:17.413767 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 06:54:17.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:17.416731 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 2 06:54:17.416765 kernel: audit: type=1130 audit(1719903257.413:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:17.420823 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 06:54:17.423899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 06:54:17.432561 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 06:54:17.435651 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:54:17.472910 ignition[1294]: INFO : Ignition 2.15.0 Jul 2 06:54:17.472910 ignition[1294]: INFO : Stage: mount Jul 2 06:54:17.475409 ignition[1294]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:17.475409 ignition[1294]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:17.475409 ignition[1294]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:17.481025 ignition[1294]: INFO : PUT result: OK Jul 2 06:54:17.485107 ignition[1294]: INFO : mount: mount passed Jul 2 06:54:17.485993 ignition[1294]: INFO : Ignition finished successfully Jul 2 06:54:17.486912 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 06:54:17.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:17.493604 kernel: audit: type=1130 audit(1719903257.489:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:17.493829 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 06:54:17.501784 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 06:54:17.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:17.506596 kernel: audit: type=1130 audit(1719903257.502:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:17.510019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:54:17.523603 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1305) Jul 2 06:54:17.523662 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:54:17.524597 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:17.525821 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 06:54:17.532611 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 06:54:17.535035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 06:54:17.562970 ignition[1323]: INFO : Ignition 2.15.0 Jul 2 06:54:17.562970 ignition[1323]: INFO : Stage: files Jul 2 06:54:17.565038 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:17.565038 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:17.565038 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:17.565038 ignition[1323]: INFO : PUT result: OK Jul 2 06:54:17.571586 ignition[1323]: DEBUG : files: compiled without relabeling support, skipping Jul 2 06:54:17.573315 ignition[1323]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 06:54:17.573315 ignition[1323]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 06:54:17.604212 ignition[1323]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 06:54:17.605736 ignition[1323]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 06:54:17.607597 unknown[1323]: wrote ssh authorized keys file for user: core Jul 2 06:54:17.608930 ignition[1323]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 06:54:17.611440 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:54:17.613365 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 06:54:17.671491 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 06:54:17.772685 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:54:17.775023 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 
06:54:17.776884 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 06:54:17.776884 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:54:17.781358 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:54:17.783147 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:54:17.785318 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:54:17.787939 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:54:17.790311 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:54:17.792146 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:54:17.794990 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:54:17.797088 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:54:17.799660 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:54:17.799660 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:54:17.804602 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 06:54:17.993774 systemd-networkd[1121]: eth0: Gained IPv6LL Jul 2 06:54:18.205598 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 06:54:19.742223 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:54:19.742223 ignition[1323]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 06:54:19.751675 ignition[1323]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:54:19.757993 ignition[1323]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:54:19.757993 ignition[1323]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 06:54:19.757993 ignition[1323]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 06:54:19.764619 ignition[1323]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 06:54:19.766034 ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:54:19.767963 
ignition[1323]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:54:19.770038 ignition[1323]: INFO : files: files passed Jul 2 06:54:19.770038 ignition[1323]: INFO : Ignition finished successfully Jul 2 06:54:19.772505 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 06:54:19.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.779691 kernel: audit: type=1130 audit(1719903259.774:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.780798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 06:54:19.783089 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 06:54:19.786856 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 06:54:19.796848 kernel: audit: type=1130 audit(1719903259.791:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.796895 kernel: audit: type=1131 audit(1719903259.791:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.786985 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 06:54:19.805852 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:54:19.805852 initrd-setup-root-after-ignition[1349]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:54:19.811066 initrd-setup-root-after-ignition[1353]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:54:19.813539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:54:19.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.814938 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 06:54:19.820197 kernel: audit: type=1130 audit(1719903259.814:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.828856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 06:54:19.854785 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 06:54:19.854935 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
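The operations enumerated by the files stage above (remote files written under /opt, a kubernetes.raw link under /etc/extensions, a prepare-helm.service unit written and preset to enabled, SSH keys for core) correspond to entries in the Ignition config fetched earlier from user data. The Python sketch below assembles a representative Ignition-v3-style document with a subset of those entries and prints it as JSON. It is an illustrative reconstruction, not the actual config this instance booted with; the spec version, SSH key, and unit contents are placeholders.

# Sketch: an Ignition-v3-style config that would produce the file, link, and
# unit operations listed by the "files" stage. Field names follow the commonly
# documented Ignition v3 schema; values marked "placeholder" are assumptions.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))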
Jul 2 06:54:19.861665 kernel: audit: type=1130 audit(1719903259.856:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.861699 kernel: audit: type=1131 audit(1719903259.858:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.860613 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 06:54:19.863526 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 06:54:19.865708 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 06:54:19.879504 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 06:54:19.893929 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 06:54:19.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.899597 kernel: audit: type=1130 audit(1719903259.893:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.900852 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 06:54:19.913264 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:54:19.913501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:54:19.917353 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 06:54:19.920782 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 06:54:19.921863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 06:54:19.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.924715 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 06:54:19.927008 systemd[1]: Stopped target basic.target - Basic System. Jul 2 06:54:19.928796 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 06:54:19.931168 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:54:19.933239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 06:54:19.935427 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 06:54:19.937639 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:54:19.943533 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 06:54:19.948834 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jul 2 06:54:19.951588 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:54:19.952166 systemd[1]: Stopped target swap.target - Swaps. Jul 2 06:54:19.956970 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 06:54:19.957154 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:54:19.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.961812 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:54:19.968599 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 06:54:19.968722 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 06:54:19.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.973602 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 06:54:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.973777 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:54:19.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:19.978435 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 06:54:19.978658 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 06:54:19.986462 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 06:54:19.989971 iscsid[1127]: iscsid shutting down. Jul 2 06:54:19.989226 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jul 2 06:54:19.991678 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 06:54:19.992782 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:54:19.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.005818 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 06:54:20.009796 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 06:54:20.011689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:54:20.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.015790 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 06:54:20.021163 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:54:20.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.029714 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 06:54:20.030075 systemd[1]: Stopped iscsid.service - Open-iSCSI. 
Jul 2 06:54:20.034709 ignition[1367]: INFO : Ignition 2.15.0 Jul 2 06:54:20.034709 ignition[1367]: INFO : Stage: umount Jul 2 06:54:20.034709 ignition[1367]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:20.034709 ignition[1367]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 06:54:20.034709 ignition[1367]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 06:54:20.047478 ignition[1367]: INFO : PUT result: OK Jul 2 06:54:20.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.041224 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:54:20.053230 ignition[1367]: INFO : umount: umount passed Jul 2 06:54:20.053230 ignition[1367]: INFO : Ignition finished successfully Jul 2 06:54:20.054677 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 06:54:20.056278 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 06:54:20.056512 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:54:20.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.062638 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 06:54:20.063807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 06:54:20.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.066474 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 06:54:20.067551 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 06:54:20.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.069946 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 06:54:20.070070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 06:54:20.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.073875 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 06:54:20.073954 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 06:54:20.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.080735 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 06:54:20.080810 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 06:54:20.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:20.090039 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 06:54:20.090534 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 06:54:20.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.092351 systemd[1]: Stopped target network.target - Network. Jul 2 06:54:20.092417 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 06:54:20.092820 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:54:20.092996 systemd[1]: Stopped target paths.target - Path Units. Jul 2 06:54:20.093084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 06:54:20.098723 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:54:20.100149 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 06:54:20.101920 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 06:54:20.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.105479 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 06:54:20.106442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:54:20.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.108724 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 06:54:20.108780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:54:20.111830 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 06:54:20.111906 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 06:54:20.115709 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 06:54:20.115781 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 06:54:20.118551 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 06:54:20.120848 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 06:54:20.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.143000 audit: BPF prog-id=6 op=UNLOAD Jul 2 06:54:20.128640 systemd-networkd[1121]: eth0: DHCPv6 lease lost Jul 2 06:54:20.144000 audit: BPF prog-id=9 op=UNLOAD Jul 2 06:54:20.136157 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 06:54:20.136267 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 06:54:20.141043 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 2 06:54:20.141222 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 06:54:20.144364 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 06:54:20.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.144398 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:54:20.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.150845 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 06:54:20.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.152128 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 06:54:20.152191 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:54:20.154664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 06:54:20.154713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:54:20.167417 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 06:54:20.167475 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 06:54:20.176236 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 06:54:20.176295 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:54:20.184747 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:54:20.190841 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 06:54:20.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.190957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 06:54:20.197221 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 06:54:20.197546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:54:20.213912 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 06:54:20.214104 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 06:54:20.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.216411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 06:54:20.216452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 06:54:20.219346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 06:54:20.219453 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 2 06:54:20.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.221562 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 06:54:20.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.221639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:54:20.225401 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 06:54:20.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.225453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 06:54:20.227594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 06:54:20.227646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:54:20.236795 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 06:54:20.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.237854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 06:54:20.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:20.237931 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:54:20.246100 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 06:54:20.246230 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 06:54:20.250436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 06:54:20.256961 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 06:54:20.267663 systemd[1]: Switching root. Jul 2 06:54:20.293571 systemd-journald[180]: Journal stopped Jul 2 06:54:22.055086 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jul 2 06:54:22.055160 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jul 2 06:54:22.055183 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 06:54:22.055204 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 06:54:22.055232 kernel: SELinux: policy capability open_perms=1 Jul 2 06:54:22.055251 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 06:54:22.055270 kernel: SELinux: policy capability always_check_network=0 Jul 2 06:54:22.055289 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 06:54:22.055308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 06:54:22.055328 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 06:54:22.055348 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 06:54:22.055372 systemd[1]: Successfully loaded SELinux policy in 88.726ms. Jul 2 06:54:22.055405 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.738ms. Jul 2 06:54:22.055429 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:54:22.055451 systemd[1]: Detected virtualization amazon. Jul 2 06:54:22.055471 systemd[1]: Detected architecture x86-64. Jul 2 06:54:22.055491 systemd[1]: Detected first boot. Jul 2 06:54:22.055512 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:54:22.055538 systemd[1]: Populated /etc with preset unit settings. Jul 2 06:54:22.055558 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 06:54:22.055626 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 06:54:22.055649 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 06:54:22.055670 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 06:54:22.055692 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 06:54:22.055713 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 06:54:22.055733 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 06:54:22.055754 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 06:54:22.055774 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 06:54:22.055796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 06:54:22.055818 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 06:54:22.055838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:54:22.055858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 06:54:22.055879 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 06:54:22.055899 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 06:54:22.055919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 06:54:22.056063 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 06:54:22.056086 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jul 2 06:54:22.056110 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 06:54:22.056131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:54:22.056152 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:54:22.056173 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:54:22.056194 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:54:22.056215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 06:54:22.056235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 06:54:22.057433 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 06:54:22.057479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:54:22.057501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:54:22.057522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:54:22.057543 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 06:54:22.057564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 06:54:22.057598 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 06:54:22.057619 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 06:54:22.057640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:22.057665 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 06:54:22.057688 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 06:54:22.057709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 06:54:22.057730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 06:54:22.057750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:54:22.057772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:54:22.057792 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 06:54:22.057812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:54:22.057832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:54:22.057854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:54:22.057877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 06:54:22.057897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:54:22.057918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 06:54:22.057938 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 06:54:22.057958 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 06:54:22.057978 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 06:54:22.057998 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 06:54:22.058018 systemd[1]: Stopped systemd-journald.service - Journal Service. Jul 2 06:54:22.058047 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 2 06:54:22.058067 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:54:22.058088 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 06:54:22.058108 kernel: fuse: init (API version 7.37) Jul 2 06:54:22.058128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 06:54:22.058149 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:54:22.058165 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 06:54:22.058182 systemd[1]: Stopped verity-setup.service. Jul 2 06:54:22.058364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:22.058391 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 06:54:22.058688 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 06:54:22.058716 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 06:54:22.058733 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 06:54:22.058751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 06:54:22.058771 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 06:54:22.058793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:54:22.058983 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 06:54:22.059005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 06:54:22.059030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:54:22.059047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:54:22.059068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:54:22.059089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:54:22.059110 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 06:54:22.059130 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 06:54:22.059149 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 06:54:22.059166 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 06:54:22.059190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 06:54:22.059228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 06:54:22.059247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 06:54:22.059268 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 06:54:22.059288 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 06:54:22.059311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:54:22.059329 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 06:54:22.059349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 06:54:22.059367 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 2 06:54:22.059493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:54:22.059516 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:54:22.059537 kernel: loop: module loaded Jul 2 06:54:22.059555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:54:22.059622 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 06:54:22.059647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:54:22.059665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:54:22.059684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:54:22.059725 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 06:54:22.059751 systemd-journald[1467]: Journal started Jul 2 06:54:22.059841 systemd-journald[1467]: Runtime Journal (/run/log/journal/ec267de58ff84e11569dd7e873a5a5f6) is 4.8M, max 38.6M, 33.8M free. Jul 2 06:54:20.649000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 06:54:20.853000 audit: BPF prog-id=10 op=LOAD Jul 2 06:54:20.853000 audit: BPF prog-id=10 op=UNLOAD Jul 2 06:54:20.853000 audit: BPF prog-id=11 op=LOAD Jul 2 06:54:20.853000 audit: BPF prog-id=11 op=UNLOAD Jul 2 06:54:21.552000 audit: BPF prog-id=12 op=LOAD Jul 2 06:54:21.552000 audit: BPF prog-id=3 op=UNLOAD Jul 2 06:54:21.552000 audit: BPF prog-id=13 op=LOAD Jul 2 06:54:21.552000 audit: BPF prog-id=14 op=LOAD Jul 2 06:54:21.552000 audit: BPF prog-id=4 op=UNLOAD Jul 2 06:54:21.552000 audit: BPF prog-id=5 op=UNLOAD Jul 2 06:54:21.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.557000 audit: BPF prog-id=12 op=UNLOAD Jul 2 06:54:21.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:21.795000 audit: BPF prog-id=15 op=LOAD Jul 2 06:54:21.795000 audit: BPF prog-id=16 op=LOAD Jul 2 06:54:21.795000 audit: BPF prog-id=17 op=LOAD Jul 2 06:54:21.797000 audit: BPF prog-id=14 op=UNLOAD Jul 2 06:54:21.797000 audit: BPF prog-id=13 op=UNLOAD Jul 2 06:54:21.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:22.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.052000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 06:54:22.052000 audit[1467]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffb13bb9f0 a2=4000 a3=7fffb13bba8c items=0 ppid=1 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:22.052000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 06:54:22.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.543621 systemd[1]: Queued start job for default target multi-user.target. Jul 2 06:54:22.063525 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:54:21.543633 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 06:54:21.554221 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 06:54:22.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.063860 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 06:54:22.070878 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 06:54:22.073433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:54:22.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.077597 kernel: ACPI: bus type drm_connector registered Jul 2 06:54:22.083651 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:54:22.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.083867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:54:22.093842 udevadm[1485]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 06:54:22.101371 systemd-journald[1467]: Time spent on flushing to /var/log/journal/ec267de58ff84e11569dd7e873a5a5f6 is 45.650ms for 1093 entries. Jul 2 06:54:22.101371 systemd-journald[1467]: System Journal (/var/log/journal/ec267de58ff84e11569dd7e873a5a5f6) is 8.0M, max 195.6M, 187.6M free. Jul 2 06:54:22.158197 systemd-journald[1467]: Received client request to flush runtime journal. Jul 2 06:54:22.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.159687 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 06:54:22.186876 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 06:54:22.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.196051 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 06:54:22.236881 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 06:54:22.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.932139 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 06:54:22.936623 kernel: kauditd_printk_skb: 87 callbacks suppressed Jul 2 06:54:22.936706 kernel: audit: type=1130 audit(1719903262.933:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.936736 kernel: audit: type=1334 audit(1719903262.934:130): prog-id=18 op=LOAD Jul 2 06:54:22.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:22.934000 audit: BPF prog-id=18 op=LOAD Jul 2 06:54:22.934000 audit: BPF prog-id=19 op=LOAD Jul 2 06:54:22.937760 kernel: audit: type=1334 audit(1719903262.934:131): prog-id=19 op=LOAD Jul 2 06:54:22.937790 kernel: audit: type=1334 audit(1719903262.934:132): prog-id=7 op=UNLOAD Jul 2 06:54:22.934000 audit: BPF prog-id=7 op=UNLOAD Jul 2 06:54:22.934000 audit: BPF prog-id=8 op=UNLOAD Jul 2 06:54:22.938995 kernel: audit: type=1334 audit(1719903262.934:133): prog-id=8 op=UNLOAD Jul 2 06:54:22.939895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:54:22.983020 systemd-udevd[1508]: Using default interface naming scheme 'v252'. Jul 2 06:54:23.049718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:54:23.059262 kernel: audit: type=1130 audit(1719903263.051:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:23.059358 kernel: audit: type=1334 audit(1719903263.052:135): prog-id=20 op=LOAD Jul 2 06:54:23.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.052000 audit: BPF prog-id=20 op=LOAD Jul 2 06:54:23.060815 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:54:23.081393 kernel: audit: type=1334 audit(1719903263.074:136): prog-id=21 op=LOAD Jul 2 06:54:23.081486 kernel: audit: type=1334 audit(1719903263.074:137): prog-id=22 op=LOAD Jul 2 06:54:23.081513 kernel: audit: type=1334 audit(1719903263.074:138): prog-id=23 op=LOAD Jul 2 06:54:23.074000 audit: BPF prog-id=21 op=LOAD Jul 2 06:54:23.074000 audit: BPF prog-id=22 op=LOAD Jul 2 06:54:23.074000 audit: BPF prog-id=23 op=LOAD Jul 2 06:54:23.079854 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 06:54:23.125486 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 06:54:23.135919 (udev-worker)[1515]: Network interface NamePolicy= disabled on kernel command line. Jul 2 06:54:23.155270 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 06:54:23.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.164606 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1521) Jul 2 06:54:23.257598 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 06:54:23.264782 kernel: ACPI: button: Power Button [PWRF] Jul 2 06:54:23.264890 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 2 06:54:23.265595 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 06:54:23.267340 systemd-networkd[1514]: lo: Link UP Jul 2 06:54:23.267357 systemd-networkd[1514]: lo: Gained carrier Jul 2 06:54:23.268928 systemd-networkd[1514]: Enumeration completed Jul 2 06:54:23.269076 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:54:23.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.271469 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:54:23.271479 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:54:23.275843 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 06:54:23.278661 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:54:23.279467 systemd-networkd[1514]: eth0: Link UP Jul 2 06:54:23.279767 systemd-networkd[1514]: eth0: Gained carrier Jul 2 06:54:23.279791 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
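The `audit[1]: SERVICE_START` / `SERVICE_STOP` records that dominate this stretch of the boot share a fixed field layout (`unit=`, `comm=`, `exe=`, `res=`). A minimal Python sketch for pulling the unit name and result out of such lines, assuming only the layout visible in this log rather than any official audit parser:

```python
import re

# Keyed to the SERVICE_START / SERVICE_STOP records seen in this log;
# not a general-purpose audit-record parser.
AUDIT_RE = re.compile(
    r"audit\[\d+\]:\s+(?P<type>SERVICE_START|SERVICE_STOP)\s+.*?"
    r"unit=(?P<unit>\S+).*?res=(?P<res>\w+)"
)

def service_events(lines):
    """Yield (event_type, unit, result) for each matching audit record."""
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            yield m.group("type"), m.group("unit"), m.group("res")

sample = ("Jul 2 06:54:21.887000 audit[1]: SERVICE_START pid=1 uid=0 "
          "auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 "
          "msg='unit=kmod-static-nodes comm=\"systemd\" res=success'")
print(list(service_events([sample])))
# -> [('SERVICE_START', 'kmod-static-nodes', 'success')]
```

Run over the full journal, this gives a quick per-unit start/stop timeline for the boot.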
Jul 2 06:54:23.292791 systemd-networkd[1514]: eth0: DHCPv4 address 172.31.18.4/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 06:54:23.347615 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jul 2 06:54:23.357615 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jul 2 06:54:23.405658 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 06:54:23.411698 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1515) Jul 2 06:54:23.623082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 06:54:23.626326 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 06:54:23.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.635926 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 06:54:23.675309 lvm[1623]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:54:23.723056 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 06:54:23.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.726081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:54:23.737928 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 06:54:23.744386 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:54:23.776516 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 06:54:23.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.778165 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:54:23.780256 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 06:54:23.780299 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:54:23.781872 systemd[1]: Reached target machines.target - Containers. Jul 2 06:54:23.793821 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 06:54:23.795167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:54:23.795234 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:23.797136 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 06:54:23.800169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
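The DHCPv4 lease recorded just above (172.31.18.4/20 from gateway 172.31.16.1) can be sanity-checked with the standard library; a small sketch using only the values taken from the log:

```python
import ipaddress

# Values copied from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("172.31.18.4/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096 addresses in a /20
print(gateway in iface.network)     # True: the gateway is on-link
```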
Jul 2 06:54:23.805224 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 06:54:23.809248 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 06:54:23.818818 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1626 (bootctl) Jul 2 06:54:23.829817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 06:54:23.840605 kernel: loop0: detected capacity change from 0 to 60984 Jul 2 06:54:23.859526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 06:54:23.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:23.993107 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 06:54:24.016610 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 06:54:24.113507 systemd-fsck[1634]: fsck.fat 4.2 (2021-01-31) Jul 2 06:54:24.113507 systemd-fsck[1634]: /dev/nvme0n1p1: 808 files, 120378/258078 clusters Jul 2 06:54:24.119653 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:54:24.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:24.132806 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 06:54:24.186411 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 06:54:24.213520 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jul 2 06:54:24.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:24.280611 kernel: loop2: detected capacity change from 0 to 80600 Jul 2 06:54:24.329912 systemd-networkd[1514]: eth0: Gained IPv6LL Jul 2 06:54:24.350074 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 06:54:24.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:24.430852 kernel: loop3: detected capacity change from 0 to 139360 Jul 2 06:54:24.606834 kernel: loop4: detected capacity change from 0 to 60984 Jul 2 06:54:24.619605 kernel: loop5: detected capacity change from 0 to 210664 Jul 2 06:54:24.641608 kernel: loop6: detected capacity change from 0 to 80600 Jul 2 06:54:24.654609 kernel: loop7: detected capacity change from 0 to 139360 Jul 2 06:54:24.671451 (sd-sysext)[1653]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 06:54:24.672206 (sd-sysext)[1653]: Merged extensions into '/usr'. Jul 2 06:54:24.675937 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
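For scale, the fsck.fat summary for /dev/nvme0n1p1 above ("808 files, 120378/258078 clusters") works out as follows; the FAT cluster size is not shown in the log, so only relative usage is computed:

```python
# Numbers taken from the fsck.fat summary for /dev/nvme0n1p1 above.
used, total = 120378, 258078
print(f"{used / total:.1%} of clusters in use")  # ~46.6%
print(f"{total - used} clusters free")           # 137700 clusters free
```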
Jul 2 06:54:24.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:24.685839 systemd[1]: Starting ensure-sysext.service... Jul 2 06:54:24.688793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:54:24.738808 systemd[1]: Reloading. Jul 2 06:54:24.745623 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 06:54:24.751432 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 06:54:24.755209 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 06:54:24.771817 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 06:54:25.168168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 06:54:25.283573 ldconfig[1625]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 06:54:25.284000 audit: BPF prog-id=24 op=LOAD Jul 2 06:54:25.285000 audit: BPF prog-id=15 op=UNLOAD Jul 2 06:54:25.285000 audit: BPF prog-id=25 op=LOAD Jul 2 06:54:25.285000 audit: BPF prog-id=26 op=LOAD Jul 2 06:54:25.285000 audit: BPF prog-id=16 op=UNLOAD Jul 2 06:54:25.285000 audit: BPF prog-id=17 op=UNLOAD Jul 2 06:54:25.287000 audit: BPF prog-id=27 op=LOAD Jul 2 06:54:25.287000 audit: BPF prog-id=20 op=UNLOAD Jul 2 06:54:25.287000 audit: BPF prog-id=28 op=LOAD Jul 2 06:54:25.287000 audit: BPF prog-id=21 op=UNLOAD Jul 2 06:54:25.287000 audit: BPF prog-id=29 op=LOAD Jul 2 06:54:25.287000 audit: BPF prog-id=30 op=LOAD Jul 2 06:54:25.287000 audit: BPF prog-id=22 op=UNLOAD Jul 2 06:54:25.287000 audit: BPF prog-id=23 op=UNLOAD Jul 2 06:54:25.289000 audit: BPF prog-id=31 op=LOAD Jul 2 06:54:25.289000 audit: BPF prog-id=32 op=LOAD Jul 2 06:54:25.289000 audit: BPF prog-id=18 op=UNLOAD Jul 2 06:54:25.289000 audit: BPF prog-id=19 op=UNLOAD Jul 2 06:54:25.295013 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 06:54:25.296378 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 06:54:25.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.301588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 06:54:25.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.303137 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:54:25.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:25.312319 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 06:54:25.331007 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 06:54:25.334646 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 06:54:25.336000 audit: BPF prog-id=33 op=LOAD Jul 2 06:54:25.340000 audit: BPF prog-id=34 op=LOAD Jul 2 06:54:25.338797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:54:25.346881 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 06:54:25.354946 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 06:54:25.363466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.365669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:54:25.372000 audit[1738]: SYSTEM_BOOT pid=1738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.368829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:54:25.378782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:54:25.385980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:54:25.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.387368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:54:25.387688 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:25.387869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.389508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
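systemd reloads and unit restarts show up throughout this log as bursts of `audit: BPF prog-id=N op=LOAD` / `op=UNLOAD` records. A small sketch that tallies those records to see which program IDs remain loaded at a given point; the regex is an assumption based only on the record format shown here:

```python
import re

BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def live_bpf_prog_ids(lines):
    """Track prog-ids through LOAD/UNLOAD audit records; return IDs still loaded."""
    live = set()
    for line in lines:
        for prog_id, op in BPF_RE.findall(line):
            if op == "LOAD":
                live.add(int(prog_id))
            else:
                live.discard(int(prog_id))
    return sorted(live)

events = [
    "Jul 2 06:54:21.795000 audit: BPF prog-id=15 op=LOAD",
    "Jul 2 06:54:25.285000 audit: BPF prog-id=15 op=UNLOAD",
    "Jul 2 06:54:25.284000 audit: BPF prog-id=24 op=LOAD",
]
print(live_bpf_prog_ids(events))  # [24]
```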
Jul 2 06:54:25.389760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:54:25.392527 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 06:54:25.394740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:54:25.394905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:54:25.406144 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:54:25.406342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:54:25.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.408908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:54:25.409158 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:54:25.412105 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 06:54:25.419250 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.419728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:54:25.428264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:54:25.432309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:54:25.439009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:54:25.440339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:54:25.440571 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:25.440792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.442374 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 06:54:25.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.444391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:54:25.444592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:54:25.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:25.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.449909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:54:25.456716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.457212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:54:25.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.467092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:54:25.481928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:54:25.483707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:54:25.483953 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:25.484150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:54:25.485522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:54:25.485824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:54:25.488001 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:54:25.488319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:54:25.490207 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 06:54:25.492156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:54:25.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:25.503187 systemd[1]: Finished ensure-sysext.service. Jul 2 06:54:25.510030 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:54:25.510210 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:54:25.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.513098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:54:25.513286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:54:25.516903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:54:25.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:25.516000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 06:54:25.516000 audit[1753]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9a2733b0 a2=420 a3=0 items=0 ppid=1724 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:25.516000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 06:54:25.517421 augenrules[1753]: No rules Jul 2 06:54:25.518572 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:54:25.535973 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 06:54:25.537332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:54:25.585170 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 06:54:25.586655 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 06:54:25.606150 systemd-resolved[1736]: Positive Trust Anchors: Jul 2 06:54:25.606274 systemd-resolved[1736]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:54:25.606315 systemd-resolved[1736]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:54:25.611450 systemd-resolved[1736]: Defaulting to hostname 'linux'. Jul 2 06:54:25.613621 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:54:25.614941 systemd[1]: Reached target network.target - Network. Jul 2 06:54:25.616191 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 06:54:25.617259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:54:25.618447 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:54:25.620082 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 06:54:25.621747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 06:54:25.626277 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 06:54:25.632639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 06:54:25.635497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 06:54:25.638483 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 06:54:25.638534 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:54:25.641815 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:54:25.648481 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 06:54:25.653900 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 06:54:25.662357 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 06:54:25.663614 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:25.664396 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 06:54:25.665705 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:54:25.666715 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:54:25.667696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:54:25.667732 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:54:25.675790 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 06:54:25.680410 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 06:54:25.683789 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 06:54:25.686881 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
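The positive trust anchor that systemd-resolved lists above is the standard DNSSEC DS record for the root zone. A small sketch that splits it into its fields; the interpretations in the comments (key tag 20326 = root KSK-2017, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256) follow the usual DNSSEC registries and are not stated in the log itself:

```python
# DS record copied from the systemd-resolved trust-anchor listing above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
assert (rrclass, rrtype) == ("IN", "DS")

print("owner:", owner)                             # "." (the DNS root)
print("key tag:", int(key_tag))                    # 20326, the root KSK-2017
print("algorithm:", int(algorithm))                # 8 = RSA/SHA-256
print("digest type:", int(digest_type))            # 2 = SHA-256
print("digest length:", len(bytes.fromhex(digest)), "bytes")  # 32
```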
Jul 2 06:54:25.693040 jq[1764]: false Jul 2 06:54:25.697811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 06:54:25.701595 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 06:54:25.705404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:54:25.709705 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 06:54:26.557977 systemd-timesyncd[1737]: Contacted time server 71.162.136.44:123 (0.flatcar.pool.ntp.org). Jul 2 06:54:26.558049 systemd-timesyncd[1737]: Initial clock synchronization to Tue 2024-07-02 06:54:26.557846 UTC. Jul 2 06:54:26.558189 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 06:54:26.561308 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 06:54:26.578675 systemd-resolved[1736]: Clock change detected. Flushing caches. Jul 2 06:54:26.583671 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 06:54:26.587866 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 06:54:26.593313 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 06:54:26.623502 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 06:54:26.624906 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:54:26.624995 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 06:54:26.625691 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 06:54:26.627611 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 06:54:26.638112 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 06:54:26.643636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 06:54:26.643924 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 06:54:26.646795 jq[1783]: true Jul 2 06:54:26.656260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 06:54:26.656596 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 06:54:26.674755 dbus-daemon[1763]: [system] SELinux support is enabled Jul 2 06:54:26.674995 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 06:54:26.679411 dbus-daemon[1763]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1514 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 06:54:26.680177 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 06:54:26.680219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
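systemd-timesyncd reports contacting 0.flatcar.pool.ntp.org (71.162.136.44:123) above and performing its initial clock synchronization, which is what triggers systemd-resolved's "Clock change detected. Flushing caches." message. As a rough illustration of the protocol involved (a plain SNTP exchange, not how timesyncd itself is implemented), a minimal client that asks the same pool for the current time:

```python
import socket
import struct
import datetime

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="0.flatcar.pool.ntp.org", port=123, timeout=5.0):
    """Send a single SNTP request and return the server's transmit time (UTC)."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(512)
    seconds, fraction = struct.unpack("!II", data[40:48])  # transmit timestamp
    unix_ts = seconds - NTP_EPOCH_OFFSET + fraction / 2**32
    return datetime.datetime.fromtimestamp(unix_ts, tz=datetime.timezone.utc)

if __name__ == "__main__":
    print(sntp_time())
```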
Jul 2 06:54:26.681555 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 06:54:26.681586 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 06:54:26.710942 tar[1786]: linux-amd64/helm Jul 2 06:54:26.723531 dbus-daemon[1763]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 06:54:26.726387 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 06:54:26.744010 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 06:54:26.761198 jq[1788]: true Jul 2 06:54:26.769989 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 06:54:26.786613 update_engine[1780]: I0702 06:54:26.778955 1780 main.cc:92] Flatcar Update Engine starting Jul 2 06:54:26.786613 update_engine[1780]: I0702 06:54:26.784636 1780 update_check_scheduler.cc:74] Next update check in 2m2s Jul 2 06:54:26.780857 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 06:54:26.784352 systemd[1]: Started update-engine.service - Update Engine. Jul 2 06:54:26.788531 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 06:54:26.799028 extend-filesystems[1765]: Found loop4 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found loop5 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found loop6 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found loop7 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p1 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p2 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p3 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found usr Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p4 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p6 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p7 Jul 2 06:54:26.799028 extend-filesystems[1765]: Found nvme0n1p9 Jul 2 06:54:26.799028 extend-filesystems[1765]: Checking size of /dev/nvme0n1p9 Jul 2 06:54:26.876269 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 06:54:26.877711 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 06:54:26.900703 extend-filesystems[1765]: Resized partition /dev/nvme0n1p9 Jul 2 06:54:26.983763 extend-filesystems[1821]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 06:54:27.003532 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 06:54:27.109511 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 06:54:27.138100 dbus-daemon[1763]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 06:54:27.138288 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 06:54:27.138914 dbus-daemon[1763]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1801 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 06:54:27.152754 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 06:54:27.180254 extend-filesystems[1821]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 06:54:27.180254 extend-filesystems[1821]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 06:54:27.180254 extend-filesystems[1821]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
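In concrete terms, the on-line resize reported above for /dev/nvme0n1p9 (553472 to 1489915 blocks at 4 KiB per block) amounts to roughly:

```python
BLOCK = 4096                              # 4 KiB blocks, per the log
old_blocks, new_blocks = 553472, 1489915  # before/after, per resize2fs

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")               # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")               # ~5.68 GiB
print(f"gained: {gib(new_blocks - old_blocks):.2f} GiB")  # ~3.57 GiB
```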
Jul 2 06:54:27.189967 extend-filesystems[1765]: Resized filesystem in /dev/nvme0n1p9 Jul 2 06:54:27.191902 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 06:54:27.192311 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 06:54:27.196114 bash[1824]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:54:27.197193 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 06:54:27.207361 systemd[1]: Starting sshkeys.service... Jul 2 06:54:27.258827 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 06:54:27.267343 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 06:54:27.288648 polkitd[1829]: Started polkitd version 121 Jul 2 06:54:27.342337 polkitd[1829]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 06:54:27.342438 polkitd[1829]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 06:54:27.353542 polkitd[1829]: Finished loading, compiling and executing 2 rules Jul 2 06:54:27.354264 dbus-daemon[1763]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 06:54:27.354449 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 06:54:27.355718 systemd-logind[1779]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 06:54:27.360608 systemd-logind[1779]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 06:54:27.360817 systemd-logind[1779]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 06:54:27.362656 systemd-logind[1779]: New seat seat0. Jul 2 06:54:27.369055 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 06:54:27.372844 polkitd[1829]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 06:54:27.435680 amazon-ssm-agent[1802]: Initializing new seelog logger Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: New Seelog Logger Creation Complete Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 processing appconfig overrides Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 processing appconfig overrides Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 processing appconfig overrides Jul 2 06:54:27.457543 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO Proxy environment variables: Jul 2 06:54:27.473248 systemd-hostnamed[1801]: Hostname set to (transient) Jul 2 06:54:27.474305 systemd-resolved[1736]: System hostname changed to 'ip-172-31-18-4'. Jul 2 06:54:27.498074 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
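The transient hostname that systemd-hostnamed and systemd-resolved settle on here, 'ip-172-31-18-4', matches the usual EC2 internal naming scheme: "ip-" plus the private IPv4 address with dots replaced by dashes. A one-line check against the DHCP lease logged earlier:

```python
private_ip = "172.31.18.4"  # DHCPv4 address from the systemd-networkd lease above
hostname = "ip-" + private_ip.replace(".", "-")
assert hostname == "ip-172-31-18-4"  # the name systemd-resolved reports
print(hostname)
```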
Jul 2 06:54:27.498227 amazon-ssm-agent[1802]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 06:54:27.498469 amazon-ssm-agent[1802]: 2024/07/02 06:54:27 processing appconfig overrides Jul 2 06:54:27.500348 coreos-metadata[1762]: Jul 02 06:54:27.497 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 06:54:27.501722 coreos-metadata[1762]: Jul 02 06:54:27.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 06:54:27.502654 coreos-metadata[1762]: Jul 02 06:54:27.502 INFO Fetch successful Jul 2 06:54:27.502917 coreos-metadata[1762]: Jul 02 06:54:27.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 06:54:27.503053 coreos-metadata[1762]: Jul 02 06:54:27.502 INFO Fetch successful Jul 2 06:54:27.503265 coreos-metadata[1762]: Jul 02 06:54:27.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 06:54:27.503443 coreos-metadata[1762]: Jul 02 06:54:27.503 INFO Fetch successful Jul 2 06:54:27.506404 coreos-metadata[1762]: Jul 02 06:54:27.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 06:54:27.507597 coreos-metadata[1762]: Jul 02 06:54:27.506 INFO Fetch successful Jul 2 06:54:27.507789 coreos-metadata[1762]: Jul 02 06:54:27.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 06:54:27.507897 coreos-metadata[1762]: Jul 02 06:54:27.507 INFO Fetch failed with 404: resource not found Jul 2 06:54:27.507995 coreos-metadata[1762]: Jul 02 06:54:27.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 06:54:27.508187 coreos-metadata[1762]: Jul 02 06:54:27.508 INFO Fetch successful Jul 2 06:54:27.511897 coreos-metadata[1762]: Jul 02 06:54:27.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 06:54:27.512052 coreos-metadata[1762]: Jul 02 06:54:27.511 INFO Fetch successful Jul 2 06:54:27.512158 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 06:54:27.512382 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetch successful Jul 2 06:54:27.512533 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 06:54:27.512638 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetch successful Jul 2 06:54:27.512738 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 06:54:27.512846 coreos-metadata[1762]: Jul 02 06:54:27.512 INFO Fetch successful Jul 2 06:54:27.579193 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO https_proxy: Jul 2 06:54:27.579281 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 06:54:27.581557 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
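The coreos-metadata fetches above (a PUT to /latest/api/token followed by GETs under /2021-01-03/meta-data/) follow the public EC2 instance metadata service interface. A rough standard-library sketch of equivalent requests; the header names and token TTL come from the documented IMDSv2 interface rather than from this log, and the calls only succeed from inside an instance:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    """Request an IMDSv2 session token (the PUT seen in the coreos-metadata log)."""
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    """Fetch one metadata path under the 2021-01-03 API version used above."""
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4",
                 "placement/availability-zone"):
        print(path, "=", imds_get(path, token))
```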
Jul 2 06:54:27.733113 coreos-metadata[1832]: Jul 02 06:54:27.732 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 06:54:27.739521 coreos-metadata[1832]: Jul 02 06:54:27.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 06:54:27.739521 coreos-metadata[1832]: Jul 02 06:54:27.735 INFO Fetch successful Jul 2 06:54:27.739521 coreos-metadata[1832]: Jul 02 06:54:27.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 06:54:27.739521 coreos-metadata[1832]: Jul 02 06:54:27.736 INFO Fetch successful Jul 2 06:54:27.739780 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO http_proxy: Jul 2 06:54:27.739897 unknown[1832]: wrote ssh authorized keys file for user: core Jul 2 06:54:27.760513 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1847) Jul 2 06:54:27.779394 update-ssh-keys[1873]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:54:27.780580 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 06:54:27.790479 systemd[1]: Finished sshkeys.service. Jul 2 06:54:27.807762 locksmithd[1804]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 06:54:27.838594 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO no_proxy: Jul 2 06:54:27.943604 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO Checking if agent identity type OnPrem can be assumed Jul 2 06:54:28.028523 containerd[1789]: time="2024-07-02T06:54:28.028334922Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 06:54:28.041811 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO Checking if agent identity type EC2 can be assumed Jul 2 06:54:28.164738 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO Agent will take identity from EC2 Jul 2 06:54:28.263267 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 06:54:28.264466 containerd[1789]: time="2024-07-02T06:54:28.264417683Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 06:54:28.264593 containerd[1789]: time="2024-07-02T06:54:28.264528060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.267785 containerd[1789]: time="2024-07-02T06:54:28.267732689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:54:28.267785 containerd[1789]: time="2024-07-02T06:54:28.267784524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268199 containerd[1789]: time="2024-07-02T06:54:28.268104466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268287 containerd[1789]: time="2024-07-02T06:54:28.268200005Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 06:54:28.268335 containerd[1789]: time="2024-07-02T06:54:28.268315146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268407 containerd[1789]: time="2024-07-02T06:54:28.268384364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268453 containerd[1789]: time="2024-07-02T06:54:28.268409888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268545 containerd[1789]: time="2024-07-02T06:54:28.268525306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268812 containerd[1789]: time="2024-07-02T06:54:28.268788446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.268897 containerd[1789]: time="2024-07-02T06:54:28.268819658Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 06:54:28.268897 containerd[1789]: time="2024-07-02T06:54:28.268855990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:54:28.269073 containerd[1789]: time="2024-07-02T06:54:28.269046452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:54:28.269121 containerd[1789]: time="2024-07-02T06:54:28.269075575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 06:54:28.269163 containerd[1789]: time="2024-07-02T06:54:28.269147067Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 06:54:28.269206 containerd[1789]: time="2024-07-02T06:54:28.269163383Z" level=info msg="metadata content store policy set" policy=shared Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.310865925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.310929876Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.310953380Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311011411Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311073756Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311091781Z" level=info msg="NRI interface is disabled by configuration." Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311112308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311280517Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311303744Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311444490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311470502Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311515011Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311543359Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.322879 containerd[1789]: time="2024-07-02T06:54:28.311565792Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311586202Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311609574Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311630918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311653709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311672657Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.311824985Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.319272316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.319347441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.319369516Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 06:54:28.327696 containerd[1789]: time="2024-07-02T06:54:28.319404124Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.328298429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.328427718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329639128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329664274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329685008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329764546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329824696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329844690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.329866237Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330103528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330130161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330149577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330346959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330368566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.330684 containerd[1789]: time="2024-07-02T06:54:28.330391514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.331717 containerd[1789]: time="2024-07-02T06:54:28.330411362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 06:54:28.331717 containerd[1789]: time="2024-07-02T06:54:28.330429166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 06:54:28.332097 containerd[1789]: time="2024-07-02T06:54:28.332009838Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 06:54:28.332405 containerd[1789]: time="2024-07-02T06:54:28.332386047Z" level=info msg="Connect containerd service" Jul 2 06:54:28.332602 containerd[1789]: time="2024-07-02T06:54:28.332584112Z" level=info msg="using legacy CRI server" Jul 2 06:54:28.333157 containerd[1789]: time="2024-07-02T06:54:28.332675719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 06:54:28.333157 containerd[1789]: time="2024-07-02T06:54:28.332765210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 06:54:28.333895 containerd[1789]: time="2024-07-02T06:54:28.333846614Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 06:54:28.334902 containerd[1789]: time="2024-07-02T06:54:28.334869361Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 06:54:28.339250 containerd[1789]: time="2024-07-02T06:54:28.339173019Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 06:54:28.339476 containerd[1789]: time="2024-07-02T06:54:28.339449932Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 06:54:28.339783 containerd[1789]: time="2024-07-02T06:54:28.339744520Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 06:54:28.341405 containerd[1789]: time="2024-07-02T06:54:28.339107543Z" level=info msg="Start subscribing containerd event" Jul 2 06:54:28.341582 containerd[1789]: time="2024-07-02T06:54:28.341563604Z" level=info msg="Start recovering state" Jul 2 06:54:28.341774 containerd[1789]: time="2024-07-02T06:54:28.341758458Z" level=info msg="Start event monitor" Jul 2 06:54:28.341866 containerd[1789]: time="2024-07-02T06:54:28.341852047Z" level=info msg="Start snapshots syncer" Jul 2 06:54:28.341993 containerd[1789]: time="2024-07-02T06:54:28.341956617Z" level=info msg="Start cni network conf syncer for default" Jul 2 06:54:28.342088 containerd[1789]: time="2024-07-02T06:54:28.342075136Z" level=info msg="Start streaming server" Jul 2 06:54:28.342918 containerd[1789]: time="2024-07-02T06:54:28.342897603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 06:54:28.352664 containerd[1789]: time="2024-07-02T06:54:28.352620393Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 06:54:28.353508 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 06:54:28.354143 containerd[1789]: time="2024-07-02T06:54:28.354117923Z" level=info msg="containerd successfully booted in 0.327103s" Jul 2 06:54:28.367963 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 06:54:28.462225 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 06:54:28.561420 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 06:54:28.662428 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 2 06:54:28.761683 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 06:54:28.865935 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 06:54:28.967058 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [Registrar] Starting registrar module Jul 2 06:54:29.067837 amazon-ssm-agent[1802]: 2024-07-02 06:54:27 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 06:54:29.277073 tar[1786]: linux-amd64/LICENSE Jul 2 06:54:29.277791 tar[1786]: linux-amd64/README.md Jul 2 06:54:29.308628 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 06:54:29.344978 amazon-ssm-agent[1802]: 2024-07-02 06:54:29 INFO [EC2Identity] EC2 registration was successful. 
Jul 2 06:54:29.380327 amazon-ssm-agent[1802]: 2024-07-02 06:54:29 INFO [CredentialRefresher] credentialRefresher has started Jul 2 06:54:29.380327 amazon-ssm-agent[1802]: 2024-07-02 06:54:29 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 06:54:29.380327 amazon-ssm-agent[1802]: 2024-07-02 06:54:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 06:54:29.445515 amazon-ssm-agent[1802]: 2024-07-02 06:54:29 INFO [CredentialRefresher] Next credential rotation will be in 32.333325640666665 minutes Jul 2 06:54:29.456074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:54:30.413611 amazon-ssm-agent[1802]: 2024-07-02 06:54:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 06:54:30.436895 kubelet[1971]: E0702 06:54:30.436851 1971 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:54:30.439406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:54:30.439610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:54:30.439945 systemd[1]: kubelet.service: Consumed 1.179s CPU time. Jul 2 06:54:30.515073 amazon-ssm-agent[1802]: 2024-07-02 06:54:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:1978) started Jul 2 06:54:30.615585 amazon-ssm-agent[1802]: 2024-07-02 06:54:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 06:54:31.143808 sshd_keygen[1805]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 06:54:31.173782 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 06:54:31.180011 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 06:54:31.188141 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 06:54:31.188366 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 06:54:31.195135 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 06:54:31.207788 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 06:54:31.213100 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 06:54:31.216597 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 06:54:31.218359 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 06:54:31.219585 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 06:54:31.228133 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 06:54:31.240027 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 06:54:31.240255 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 06:54:31.241616 systemd[1]: Startup finished in 808ms (kernel) + 7.915s (initrd) + 9.831s (userspace) = 18.556s. Jul 2 06:54:35.330192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 06:54:35.339264 systemd[1]: Started sshd@0-172.31.18.4:22-139.178.89.65:51946.service - OpenSSH per-connection server daemon (139.178.89.65:51946). 
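The kubelet failure above is expected on a node that has not yet been initialized: /var/lib/kubelet/config.yaml is normally written when the node is set up (for example by kubeadm init or kubeadm join), so until then the unit exits with status 1 and systemd keeps rescheduling it, as the later "Scheduled restart job" entries show. A minimal sketch of checking that state, assuming standard tooling; only the file path comes from the log:

    # Has the kubelet configuration been provisioned yet?
    if [ -f /var/lib/kubelet/config.yaml ]; then
        echo "kubelet config present"
    else
        echo "kubelet config missing - node not yet initialized (e.g. via kubeadm)"
    fi
    # Inspect the unit's failure and restart history.
    systemctl status kubelet.service --no-pager
    journalctl -u kubelet.service -n 20 --no-pager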
Jul 2 06:54:35.523511 sshd[2004]: Accepted publickey for core from 139.178.89.65 port 51946 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:35.526557 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:35.539315 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 06:54:35.549165 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 06:54:35.555767 systemd-logind[1779]: New session 1 of user core. Jul 2 06:54:35.569792 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 06:54:35.584249 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 06:54:35.601956 (systemd)[2007]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:35.765547 systemd[2007]: Queued start job for default target default.target. Jul 2 06:54:35.777072 systemd[2007]: Reached target paths.target - Paths. Jul 2 06:54:35.777200 systemd[2007]: Reached target sockets.target - Sockets. Jul 2 06:54:35.777233 systemd[2007]: Reached target timers.target - Timers. Jul 2 06:54:35.777250 systemd[2007]: Reached target basic.target - Basic System. Jul 2 06:54:35.777384 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 06:54:35.779421 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 06:54:35.780352 systemd[2007]: Reached target default.target - Main User Target. Jul 2 06:54:35.780594 systemd[2007]: Startup finished in 162ms. Jul 2 06:54:35.922623 systemd[1]: Started sshd@1-172.31.18.4:22-139.178.89.65:51950.service - OpenSSH per-connection server daemon (139.178.89.65:51950). Jul 2 06:54:36.086820 sshd[2016]: Accepted publickey for core from 139.178.89.65 port 51950 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:36.088595 sshd[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:36.093260 systemd-logind[1779]: New session 2 of user core. Jul 2 06:54:36.101730 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 06:54:36.223672 sshd[2016]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:36.227561 systemd[1]: sshd@1-172.31.18.4:22-139.178.89.65:51950.service: Deactivated successfully. Jul 2 06:54:36.228433 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 06:54:36.229105 systemd-logind[1779]: Session 2 logged out. Waiting for processes to exit. Jul 2 06:54:36.229994 systemd-logind[1779]: Removed session 2. Jul 2 06:54:36.254071 systemd[1]: Started sshd@2-172.31.18.4:22-139.178.89.65:51960.service - OpenSSH per-connection server daemon (139.178.89.65:51960). Jul 2 06:54:36.413511 sshd[2022]: Accepted publickey for core from 139.178.89.65 port 51960 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:36.415042 sshd[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:36.421044 systemd-logind[1779]: New session 3 of user core. Jul 2 06:54:36.427745 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 06:54:36.540839 sshd[2022]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:36.544833 systemd[1]: sshd@2-172.31.18.4:22-139.178.89.65:51960.service: Deactivated successfully. Jul 2 06:54:36.545971 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 06:54:36.546835 systemd-logind[1779]: Session 3 logged out. Waiting for processes to exit. 
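Each "Accepted publickey" line above records the SHA256 fingerprint of the client key that authenticated as user core. Assuming the matching public key file is available locally (the path below is only an illustration), the fingerprint can be reproduced and compared against what sshd logged:

    # Print the SHA256 fingerprint of a candidate public key (path is an assumption).
    ssh-keygen -lf ~/.ssh/id_rsa.pub
    # Pull the fingerprints sshd recorded, for comparison.
    journalctl -t sshd --no-pager | grep "Accepted publickey"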
Jul 2 06:54:36.547811 systemd-logind[1779]: Removed session 3. Jul 2 06:54:36.577078 systemd[1]: Started sshd@3-172.31.18.4:22-139.178.89.65:51970.service - OpenSSH per-connection server daemon (139.178.89.65:51970). Jul 2 06:54:36.748577 sshd[2028]: Accepted publickey for core from 139.178.89.65 port 51970 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:36.750054 sshd[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:36.755559 systemd-logind[1779]: New session 4 of user core. Jul 2 06:54:36.760865 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 06:54:36.883350 sshd[2028]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:36.886860 systemd[1]: sshd@3-172.31.18.4:22-139.178.89.65:51970.service: Deactivated successfully. Jul 2 06:54:36.887672 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 06:54:36.888938 systemd-logind[1779]: Session 4 logged out. Waiting for processes to exit. Jul 2 06:54:36.889883 systemd-logind[1779]: Removed session 4. Jul 2 06:54:36.931063 systemd[1]: Started sshd@4-172.31.18.4:22-139.178.89.65:51982.service - OpenSSH per-connection server daemon (139.178.89.65:51982). Jul 2 06:54:37.099371 sshd[2034]: Accepted publickey for core from 139.178.89.65 port 51982 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:37.101009 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:37.106748 systemd-logind[1779]: New session 5 of user core. Jul 2 06:54:37.116750 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 06:54:37.259027 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 06:54:37.259424 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:54:37.284033 sudo[2037]: pam_unix(sudo:session): session closed for user root Jul 2 06:54:37.308940 sshd[2034]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:37.317426 systemd[1]: sshd@4-172.31.18.4:22-139.178.89.65:51982.service: Deactivated successfully. Jul 2 06:54:37.319995 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 06:54:37.323439 systemd-logind[1779]: Session 5 logged out. Waiting for processes to exit. Jul 2 06:54:37.328540 systemd-logind[1779]: Removed session 5. Jul 2 06:54:37.356059 systemd[1]: Started sshd@5-172.31.18.4:22-139.178.89.65:51992.service - OpenSSH per-connection server daemon (139.178.89.65:51992). Jul 2 06:54:37.555656 sshd[2041]: Accepted publickey for core from 139.178.89.65 port 51992 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:37.558180 sshd[2041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:37.572569 systemd-logind[1779]: New session 6 of user core. Jul 2 06:54:37.575806 systemd[1]: Started session-6.scope - Session 6 of User core. 
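The sudo record above shows the core user running /usr/sbin/setenforce 1, i.e. switching SELinux into enforcing mode. A quick hedged check of the resulting mode, assuming the usual SELinux userland tools are on the image:

    # Report the current SELinux mode (Enforcing/Permissive/Disabled).
    getenforce
    # More detail, if available.
    sestatus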
Jul 2 06:54:37.703411 sudo[2045]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 06:54:37.703798 sudo[2045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:54:37.728669 sudo[2045]: pam_unix(sudo:session): session closed for user root Jul 2 06:54:37.744577 sudo[2044]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 06:54:37.744974 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:54:37.791060 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 06:54:37.792000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:54:37.795165 auditctl[2048]: No rules Jul 2 06:54:37.796128 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 2 06:54:37.796983 kernel: audit: type=1305 audit(1719903277.792:194): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:54:37.792000 audit[2048]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ac40490 a2=420 a3=0 items=0 ppid=1 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:37.798358 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 06:54:37.798601 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 06:54:37.816899 kernel: audit: type=1300 audit(1719903277.792:194): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ac40490 a2=420 a3=0 items=0 ppid=1 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:37.817024 kernel: audit: type=1327 audit(1719903277.792:194): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:54:37.817052 kernel: audit: type=1131 audit(1719903277.797:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.792000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:54:37.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.812993 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 06:54:37.851405 augenrules[2065]: No rules Jul 2 06:54:37.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.852093 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:54:37.855264 sudo[2044]: pam_unix(sudo:session): session closed for user root Jul 2 06:54:37.855529 kernel: audit: type=1130 audit(1719903277.850:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:37.854000 audit[2044]: USER_END pid=2044 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.854000 audit[2044]: CRED_DISP pid=2044 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.861266 kernel: audit: type=1106 audit(1719903277.854:197): pid=2044 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.861310 kernel: audit: type=1104 audit(1719903277.854:198): pid=2044 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.878368 sshd[2041]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:37.878000 audit[2041]: USER_END pid=2041 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:37.878000 audit[2041]: CRED_DISP pid=2041 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:37.882952 systemd[1]: sshd@5-172.31.18.4:22-139.178.89.65:51992.service: Deactivated successfully. Jul 2 06:54:37.883953 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 06:54:37.885499 kernel: audit: type=1106 audit(1719903277.878:199): pid=2041 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:37.885578 kernel: audit: type=1104 audit(1719903277.878:200): pid=2041 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:37.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.4:22-139.178.89.65:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.885687 systemd-logind[1779]: Session 6 logged out. Waiting for processes to exit. Jul 2 06:54:37.886929 systemd-logind[1779]: Removed session 6. Jul 2 06:54:37.888762 kernel: audit: type=1131 audit(1719903277.881:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.4:22-139.178.89.65:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:37.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.4:22-139.178.89.65:51994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:37.927061 systemd[1]: Started sshd@6-172.31.18.4:22-139.178.89.65:51994.service - OpenSSH per-connection server daemon (139.178.89.65:51994). Jul 2 06:54:38.108000 audit[2071]: USER_ACCT pid=2071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:38.110028 sshd[2071]: Accepted publickey for core from 139.178.89.65 port 51994 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:54:38.109000 audit[2071]: CRED_ACQ pid=2071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:38.109000 audit[2071]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc965bab0 a2=3 a3=7fba238d1480 items=0 ppid=1 pid=2071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:38.109000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:38.111560 sshd[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:38.117044 systemd-logind[1779]: New session 7 of user core. Jul 2 06:54:38.126750 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 06:54:38.131000 audit[2071]: USER_START pid=2071 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:38.132000 audit[2073]: CRED_ACQ pid=2073 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:54:38.229000 audit[2074]: USER_ACCT pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:38.231202 sudo[2074]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 06:54:38.229000 audit[2074]: CRED_REFR pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:38.231655 sudo[2074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:54:38.233000 audit[2074]: USER_START pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:54:38.422020 systemd[1]: Starting docker.service - Docker Application Container Engine... 
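The span above records the default audit rules being removed (/etc/audit/rules.d/80-selinux.rules and 99-default.rules deleted, audit-rules.service restarted), after which both auditctl and augenrules report "No rules", i.e. the kernel's audit rule set ends up empty. Roughly what that service restart does under the hood, sketched for illustration only:

    # Flush the audit rules currently loaded in the kernel.
    auditctl -D
    # Rebuild /etc/audit/audit.rules from the fragments in /etc/audit/rules.d and load it.
    augenrules --load
    # Show what is loaded now; "No rules" matches the journal output above.
    auditctl -l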
Jul 2 06:54:38.994982 dockerd[2083]: time="2024-07-02T06:54:38.994918310Z" level=info msg="Starting up" Jul 2 06:54:39.066396 systemd[1]: var-lib-docker-metacopy\x2dcheck1211556704-merged.mount: Deactivated successfully. Jul 2 06:54:39.098126 dockerd[2083]: time="2024-07-02T06:54:39.098053657Z" level=info msg="Loading containers: start." Jul 2 06:54:39.224000 audit[2115]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.224000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff7d642060 a2=0 a3=7f05ab599e90 items=0 ppid=2083 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.224000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 06:54:39.227000 audit[2117]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.227000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8fa80690 a2=0 a3=7ff1d2cc3e90 items=0 ppid=2083 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.227000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 06:54:39.229000 audit[2119]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.229000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcac1bbe60 a2=0 a3=7fc1cbcaae90 items=0 ppid=2083 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 06:54:39.232000 audit[2121]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.232000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe5d554dd0 a2=0 a3=7feafbe95e90 items=0 ppid=2083 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.232000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:54:39.236000 audit[2123]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.236000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc27194600 a2=0 a3=7fa9dec10e90 items=0 ppid=2083 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 
06:54:39.236000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 06:54:39.239000 audit[2125]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.239000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff71aea1a0 a2=0 a3=7f433c1e8e90 items=0 ppid=2083 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.239000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 06:54:39.252000 audit[2127]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.252000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd46b086e0 a2=0 a3=7fdc75776e90 items=0 ppid=2083 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.252000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 06:54:39.255000 audit[2129]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.255000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffec59d5ac0 a2=0 a3=7fd01bc6ce90 items=0 ppid=2083 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.255000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 06:54:39.258000 audit[2131]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.258000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd41529040 a2=0 a3=7f0521ca6e90 items=0 ppid=2083 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:54:39.270000 audit[2135]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.270000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcbfb52d80 a2=0 a3=7f6509939e90 items=0 ppid=2083 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.270000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 
06:54:39.271000 audit[2136]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.271000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd3c88ca10 a2=0 a3=7f27fad10e90 items=0 ppid=2083 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.271000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:54:39.284732 kernel: Initializing XFRM netlink socket Jul 2 06:54:39.336705 (udev-worker)[2095]: Network interface NamePolicy= disabled on kernel command line. Jul 2 06:54:39.396000 audit[2144]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.396000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc441ef240 a2=0 a3=7fa0aa328e90 items=0 ppid=2083 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.396000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 06:54:39.461000 audit[2147]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.461000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd1155d560 a2=0 a3=7f28138f8e90 items=0 ppid=2083 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.461000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 06:54:39.467000 audit[2151]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.467000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe4f12bb70 a2=0 a3=7f2cfc251e90 items=0 ppid=2083 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.467000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 06:54:39.469000 audit[2153]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.469000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffed9f7cf10 a2=0 a3=7fb8bba6ee90 items=0 ppid=2083 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.469000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 06:54:39.472000 audit[2155]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.472000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd0c908450 a2=0 a3=7fe179c8fe90 items=0 ppid=2083 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.472000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 06:54:39.475000 audit[2157]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.475000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc9b278300 a2=0 a3=7f0739928e90 items=0 ppid=2083 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 06:54:39.477000 audit[2159]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.477000 audit[2159]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffff5669f90 a2=0 a3=7f1f1e402e90 items=0 ppid=2083 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.477000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 06:54:39.486000 audit[2162]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.486000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe40d02160 a2=0 a3=7f56e2696e90 items=0 ppid=2083 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.486000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 06:54:39.489000 audit[2164]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.489000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd1fa3e010 a2=0 a3=7f23f3b22e90 items=0 ppid=2083 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.489000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 06:54:39.492000 audit[2166]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.492000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe61bc1c40 a2=0 a3=7f1a55e5be90 items=0 ppid=2083 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.492000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:54:39.494000 audit[2168]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.494000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5ee48a60 a2=0 a3=7f363b01ae90 items=0 ppid=2083 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.494000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 06:54:39.497056 systemd-networkd[1514]: docker0: Link UP Jul 2 06:54:39.512000 audit[2172]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.512000 audit[2172]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc1ad886e0 a2=0 a3=7fd9139a6e90 items=0 ppid=2083 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.512000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:54:39.514000 audit[2173]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:54:39.514000 audit[2173]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce17c8410 a2=0 a3=7f977e9d4e90 items=0 ppid=2083 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.514000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:54:39.517056 dockerd[2083]: time="2024-07-02T06:54:39.517011641Z" level=info msg="Loading containers: done." 
Jul 2 06:54:39.721536 dockerd[2083]: time="2024-07-02T06:54:39.721458412Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 06:54:39.721779 dockerd[2083]: time="2024-07-02T06:54:39.721754152Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 06:54:39.721921 dockerd[2083]: time="2024-07-02T06:54:39.721895952Z" level=info msg="Daemon has completed initialization" Jul 2 06:54:39.759888 dockerd[2083]: time="2024-07-02T06:54:39.759813146Z" level=info msg="API listen on /run/docker.sock" Jul 2 06:54:39.762156 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 06:54:39.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:40.598477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 06:54:40.598770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:54:40.598839 systemd[1]: kubelet.service: Consumed 1.179s CPU time. Jul 2 06:54:40.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:40.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:40.610430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:54:41.021643 containerd[1789]: time="2024-07-02T06:54:41.021532788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 06:54:41.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:41.634675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:54:41.742973 kubelet[2220]: E0702 06:54:41.742916 2220 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:54:41.746870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:54:41.747051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:54:41.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:54:41.905142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527885816.mount: Deactivated successfully. 
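From here the journal interleaves further kubelet restart attempts (still failing on the missing config) with containerd pulling the Kubernetes control-plane images. A hedged sketch of listing what has been pulled, assuming the ctr and crictl clients are present; CRI-managed images live in containerd's k8s.io namespace, and the socket path matches the ContainerdEndpoint shown in the CRI config dump earlier:

    # Images pulled through the CRI plugin sit in the k8s.io namespace.
    ctr --address /run/containerd/containerd.sock -n k8s.io images ls
    # The CRI-level view of the same images.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images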
Jul 2 06:54:45.013423 containerd[1789]: time="2024-07-02T06:54:45.013362858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:45.015170 containerd[1789]: time="2024-07-02T06:54:45.015110236Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 06:54:45.017648 containerd[1789]: time="2024-07-02T06:54:45.017599660Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:45.021248 containerd[1789]: time="2024-07-02T06:54:45.021199936Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:45.023680 containerd[1789]: time="2024-07-02T06:54:45.023588078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:45.030444 containerd[1789]: time="2024-07-02T06:54:45.030387838Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 4.008300462s" Jul 2 06:54:45.030693 containerd[1789]: time="2024-07-02T06:54:45.030664790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 06:54:45.061066 containerd[1789]: time="2024-07-02T06:54:45.061030576Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 06:54:48.636716 containerd[1789]: time="2024-07-02T06:54:48.636642402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:48.638647 containerd[1789]: time="2024-07-02T06:54:48.638585435Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 06:54:48.641192 containerd[1789]: time="2024-07-02T06:54:48.641153615Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:48.644280 containerd[1789]: time="2024-07-02T06:54:48.644247575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:48.647613 containerd[1789]: time="2024-07-02T06:54:48.647574531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:48.648875 containerd[1789]: time="2024-07-02T06:54:48.648834192Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 3.5875728s" Jul 2 06:54:48.649012 containerd[1789]: time="2024-07-02T06:54:48.648988834Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 06:54:48.680038 containerd[1789]: time="2024-07-02T06:54:48.679989404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 06:54:50.845877 containerd[1789]: time="2024-07-02T06:54:50.845828263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:50.848038 containerd[1789]: time="2024-07-02T06:54:50.847968177Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 06:54:50.849956 containerd[1789]: time="2024-07-02T06:54:50.849905499Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:50.857179 containerd[1789]: time="2024-07-02T06:54:50.857114303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:50.862970 containerd[1789]: time="2024-07-02T06:54:50.862912990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:50.864372 containerd[1789]: time="2024-07-02T06:54:50.864325217Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 2.184287928s" Jul 2 06:54:50.864564 containerd[1789]: time="2024-07-02T06:54:50.864537024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 06:54:50.893672 containerd[1789]: time="2024-07-02T06:54:50.893635892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 06:54:51.848420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 06:54:51.848718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:54:51.858400 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 2 06:54:51.858562 kernel: audit: type=1130 audit(1719903291.847:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:51.858605 kernel: audit: type=1131 audit(1719903291.847:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:51.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 06:54:51.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:51.863201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:54:52.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:52.364279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:54:52.367558 kernel: audit: type=1130 audit(1719903292.363:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:52.502384 kubelet[2310]: E0702 06:54:52.502262 2310 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:54:52.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:54:52.505755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:54:52.506025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:54:52.510512 kernel: audit: type=1131 audit(1719903292.504:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:54:52.955053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461996331.mount: Deactivated successfully. 
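
Annotation: the kubelet failures above ("failed to load Kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") are expected at this point in the boot: that file is typically generated later by kubeadm, and until it exists systemd keeps restarting the unit, which is what the rising "restart counter" entries and the paired SERVICE_START/SERVICE_STOP audit records reflect. A minimal sketch for summarizing such a crash loop from a journal dump follows; the sample excerpt and the regular expressions are illustrative assumptions about this log's textual form, not a stable interface.

    import re

    # Abbreviated sample in the shape of the kubelet entries above;
    # in practice this would come from a `journalctl -u kubelet` dump.
    JOURNAL = """\
    systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
    kubelet[2310]: "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml: no such file or directory"
    systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
    """

    restart_re = re.compile(r"restart counter is at (\d+)")
    error_re = re.compile(r'err="([^"]+)"')

    restarts = [int(m.group(1)) for m in restart_re.finditer(JOURNAL)]
    errors = sorted({m.group(1) for m in error_re.finditer(JOURNAL)})

    print(f"restart counter has reached {max(restarts)}")
    for err in errors:
        print("failing with:", err)
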
Jul 2 06:54:53.743971 containerd[1789]: time="2024-07-02T06:54:53.743915731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:53.745402 containerd[1789]: time="2024-07-02T06:54:53.745345458Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 06:54:53.746778 containerd[1789]: time="2024-07-02T06:54:53.746745716Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:53.749470 containerd[1789]: time="2024-07-02T06:54:53.749438115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:53.752805 containerd[1789]: time="2024-07-02T06:54:53.752772309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:53.754161 containerd[1789]: time="2024-07-02T06:54:53.754117843Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.860018701s" Jul 2 06:54:53.754270 containerd[1789]: time="2024-07-02T06:54:53.754169392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 06:54:53.790363 containerd[1789]: time="2024-07-02T06:54:53.790331978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 06:54:54.343027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571425748.mount: Deactivated successfully. 
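
Annotation: each "Pulled image ... in Ns" entry quotes the image size containerd reports alongside the wall-clock pull time, which is enough to estimate effective pull throughput. The sizes are the registry bytes as reported in those messages, so the result is only an approximation; the figures below are copied from the entries above.

    # (bytes, seconds) taken from the "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.30.2":          (32768601, 4.008300462),
        "kube-controller-manager:v1.30.2": (31138657, 3.5875728),
        "kube-scheduler:v1.30.2":          (19328121, 2.184287928),
        "kube-proxy:v1.30.2":              (29034457, 2.860018701),
    }

    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / (1024 * 1024)
        print(f"{image:35s} {size_bytes/1e6:6.1f} MB in {seconds:5.2f}s  ~ {mib_per_s:4.1f} MiB/s")
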
Jul 2 06:54:55.745132 containerd[1789]: time="2024-07-02T06:54:55.745076086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:55.746514 containerd[1789]: time="2024-07-02T06:54:55.746447003Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 06:54:55.748187 containerd[1789]: time="2024-07-02T06:54:55.748148683Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:55.751707 containerd[1789]: time="2024-07-02T06:54:55.751667797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:55.760655 containerd[1789]: time="2024-07-02T06:54:55.760581095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:55.762271 containerd[1789]: time="2024-07-02T06:54:55.762167225Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.971592717s" Jul 2 06:54:55.762399 containerd[1789]: time="2024-07-02T06:54:55.762277072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 06:54:55.796757 containerd[1789]: time="2024-07-02T06:54:55.796713679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 06:54:56.404668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666797633.mount: Deactivated successfully. 
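
Annotation: the recurring `var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount` units are transient mount units for temporary mounts containerd sets up (typically while unpacking image layers). Systemd encodes the mount path in the unit name, turning "/" into "-" and escaping literal dashes as `\x2d`. A small sketch of the reverse mapping, shown only to make those unit names readable (equivalent in spirit to `systemd-escape --unescape --path`; it ignores other escape rules systemd applies):

    import re

    def unescape_unit(name: str) -> str:
        """Recover the mount path from a systemd mount unit name."""
        stem = name.removesuffix(".mount")
        # \xNN sequences encode escaped bytes (e.g. \x2d is a literal '-');
        # the remaining dashes are path separators.
        parts = [re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), p)
                 for p in stem.split("-")]
        return "/" + "/".join(parts)

    print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3666797633.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount3666797633
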
Jul 2 06:54:56.424744 containerd[1789]: time="2024-07-02T06:54:56.424691334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:56.425936 containerd[1789]: time="2024-07-02T06:54:56.425875774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 06:54:56.427796 containerd[1789]: time="2024-07-02T06:54:56.427756523Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:56.432260 containerd[1789]: time="2024-07-02T06:54:56.432218093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:56.447503 containerd[1789]: time="2024-07-02T06:54:56.447439285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:56.448458 containerd[1789]: time="2024-07-02T06:54:56.448406047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 651.643948ms" Jul 2 06:54:56.448628 containerd[1789]: time="2024-07-02T06:54:56.448463326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 06:54:56.479909 containerd[1789]: time="2024-07-02T06:54:56.479873219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 06:54:57.041838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983654158.mount: Deactivated successfully. Jul 2 06:54:57.514242 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 06:54:57.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:57.519521 kernel: audit: type=1131 audit(1719903297.513:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:57.543246 kernel: audit: type=1334 audit(1719903297.539:245): prog-id=40 op=UNLOAD Jul 2 06:54:57.543387 kernel: audit: type=1334 audit(1719903297.539:246): prog-id=39 op=UNLOAD Jul 2 06:54:57.543439 kernel: audit: type=1334 audit(1719903297.539:247): prog-id=38 op=UNLOAD Jul 2 06:54:57.539000 audit: BPF prog-id=40 op=UNLOAD Jul 2 06:54:57.539000 audit: BPF prog-id=39 op=UNLOAD Jul 2 06:54:57.539000 audit: BPF prog-id=38 op=UNLOAD Jul 2 06:55:02.598474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 06:55:02.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:02.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:02.599004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:02.611935 kernel: audit: type=1130 audit(1719903302.598:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:02.612044 kernel: audit: type=1131 audit(1719903302.598:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:02.613965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:02.618280 containerd[1789]: time="2024-07-02T06:55:02.618213353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:02.653665 containerd[1789]: time="2024-07-02T06:55:02.653179081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 06:55:02.663085 containerd[1789]: time="2024-07-02T06:55:02.663024223Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:02.703873 containerd[1789]: time="2024-07-02T06:55:02.703824006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:02.734649 containerd[1789]: time="2024-07-02T06:55:02.734597078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:02.737729 containerd[1789]: time="2024-07-02T06:55:02.737669101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 6.257579777s" Jul 2 06:55:02.739201 containerd[1789]: time="2024-07-02T06:55:02.737735082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 06:55:03.143936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:03.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:03.148527 kernel: audit: type=1130 audit(1719903303.143:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:03.309090 kubelet[2450]: E0702 06:55:03.308899 2450 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:55:03.312971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:55:03.313151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:55:03.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:55:03.316545 kernel: audit: type=1131 audit(1719903303.312:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:55:06.356969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:06.362034 kernel: audit: type=1130 audit(1719903306.355:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.362134 kernel: audit: type=1131 audit(1719903306.355:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.364475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:06.393358 systemd[1]: Reloading. Jul 2 06:55:06.747196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
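
Annotation: the `systemd[1]: Reloading.` entry marks a daemon reload, and the docker.socket note right after it is systemd rewriting the legacy /var/run/docker.sock listener to /run/docker.sock while asking for the unit file to be updated. The burst of `audit: BPF prog-id=... op=LOAD`/`op=UNLOAD` records that follows typically corresponds to systemd re-attaching the per-unit BPF programs it manages (socket filters, device and IP access policies) during that reload, retiring old program IDs and loading replacements. A throwaway sketch for summarizing the churn; the abbreviated sample lines and the parsing are assumptions about this log's text, not a stable format.

    import re

    SAMPLE = """\
    audit: BPF prog-id=41 op=LOAD
    audit: BPF prog-id=33 op=UNLOAD
    audit: BPF prog-id=42 op=LOAD
    audit: BPF prog-id=24 op=UNLOAD
    """

    loads = re.findall(r"BPF prog-id=(\d+) op=LOAD\b", SAMPLE)
    unloads = re.findall(r"BPF prog-id=(\d+) op=UNLOAD\b", SAMPLE)
    print(f"loaded {len(loads)} program(s) {loads}, unloaded {len(unloads)} {unloads}")
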
Jul 2 06:55:06.860385 kernel: audit: type=1334 audit(1719903306.853:254): prog-id=41 op=LOAD Jul 2 06:55:06.860551 kernel: audit: type=1334 audit(1719903306.853:255): prog-id=33 op=UNLOAD Jul 2 06:55:06.860589 kernel: audit: type=1334 audit(1719903306.855:256): prog-id=42 op=LOAD Jul 2 06:55:06.860680 kernel: audit: type=1334 audit(1719903306.855:257): prog-id=24 op=UNLOAD Jul 2 06:55:06.853000 audit: BPF prog-id=41 op=LOAD Jul 2 06:55:06.853000 audit: BPF prog-id=33 op=UNLOAD Jul 2 06:55:06.855000 audit: BPF prog-id=42 op=LOAD Jul 2 06:55:06.855000 audit: BPF prog-id=24 op=UNLOAD Jul 2 06:55:06.855000 audit: BPF prog-id=43 op=LOAD Jul 2 06:55:06.855000 audit: BPF prog-id=44 op=LOAD Jul 2 06:55:06.855000 audit: BPF prog-id=25 op=UNLOAD Jul 2 06:55:06.855000 audit: BPF prog-id=26 op=UNLOAD Jul 2 06:55:06.858000 audit: BPF prog-id=45 op=LOAD Jul 2 06:55:06.858000 audit: BPF prog-id=34 op=UNLOAD Jul 2 06:55:06.861000 audit: BPF prog-id=46 op=LOAD Jul 2 06:55:06.861000 audit: BPF prog-id=27 op=UNLOAD Jul 2 06:55:06.862000 audit: BPF prog-id=47 op=LOAD Jul 2 06:55:06.862000 audit: BPF prog-id=28 op=UNLOAD Jul 2 06:55:06.862000 audit: BPF prog-id=48 op=LOAD Jul 2 06:55:06.862000 audit: BPF prog-id=49 op=LOAD Jul 2 06:55:06.862000 audit: BPF prog-id=29 op=UNLOAD Jul 2 06:55:06.862000 audit: BPF prog-id=30 op=UNLOAD Jul 2 06:55:06.863000 audit: BPF prog-id=50 op=LOAD Jul 2 06:55:06.863000 audit: BPF prog-id=35 op=UNLOAD Jul 2 06:55:06.863000 audit: BPF prog-id=51 op=LOAD Jul 2 06:55:06.863000 audit: BPF prog-id=52 op=LOAD Jul 2 06:55:06.863000 audit: BPF prog-id=36 op=UNLOAD Jul 2 06:55:06.863000 audit: BPF prog-id=37 op=UNLOAD Jul 2 06:55:06.865000 audit: BPF prog-id=53 op=LOAD Jul 2 06:55:06.865000 audit: BPF prog-id=54 op=LOAD Jul 2 06:55:06.865000 audit: BPF prog-id=31 op=UNLOAD Jul 2 06:55:06.865000 audit: BPF prog-id=32 op=UNLOAD Jul 2 06:55:06.921480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:06.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.929564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:06.930333 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:55:06.930667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:06.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:06.935397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:07.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:07.222795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:07.308656 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:55:07.309075 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 2 06:55:07.309144 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:55:07.321365 kubelet[2580]: I0702 06:55:07.320989 2580 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:55:07.833783 kubelet[2580]: I0702 06:55:07.833740 2580 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 06:55:07.833783 kubelet[2580]: I0702 06:55:07.833771 2580 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:55:07.834090 kubelet[2580]: I0702 06:55:07.834055 2580 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 06:55:07.878716 kubelet[2580]: I0702 06:55:07.878643 2580 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:55:07.881217 kubelet[2580]: E0702 06:55:07.881193 2580 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.894783 kubelet[2580]: I0702 06:55:07.894750 2580 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 06:55:07.898050 kubelet[2580]: I0702 06:55:07.897987 2580 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:55:07.898267 kubelet[2580]: I0702 06:55:07.898042 2580 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:55:07.898414 kubelet[2580]: I0702 
06:55:07.898284 2580 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:55:07.898414 kubelet[2580]: I0702 06:55:07.898300 2580 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:55:07.898549 kubelet[2580]: I0702 06:55:07.898465 2580 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:55:07.900276 kubelet[2580]: W0702 06:55:07.900209 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-4&limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.901786 kubelet[2580]: I0702 06:55:07.901759 2580 kubelet.go:400] "Attempting to sync node with API server" Jul 2 06:55:07.901891 kubelet[2580]: I0702 06:55:07.901790 2580 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:55:07.901891 kubelet[2580]: I0702 06:55:07.901823 2580 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:55:07.901891 kubelet[2580]: I0702 06:55:07.901841 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:55:07.902028 kubelet[2580]: E0702 06:55:07.902017 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-4&limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.913696 kubelet[2580]: I0702 06:55:07.913664 2580 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:55:07.916357 kubelet[2580]: I0702 06:55:07.916326 2580 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:55:07.916611 kubelet[2580]: W0702 06:55:07.916598 2580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
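
Annotation: the NodeConfig dump above lists the hard-eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The percentage signals resolve against the node's actual capacity, so the absolute trip points depend on the instance. A worked sketch with assumed capacities (the 2 GiB RAM and 30 GiB disk figures are illustrative, not read from this host):

    GiB = 1024 ** 3

    # Assumed capacities, for illustration only.
    memory_capacity = 2 * GiB
    nodefs_capacity = 30 * GiB
    imagefs_capacity = nodefs_capacity  # no separate image filesystem assumed

    thresholds = {
        "memory.available":  100 * 1024 ** 2,         # absolute 100Mi
        "nodefs.available":  0.10 * nodefs_capacity,   # 10%
        "imagefs.available": 0.15 * imagefs_capacity,  # 15%
    }

    for signal, trip in thresholds.items():
        print(f"evict when {signal} drops below {trip / GiB:.2f} GiB")
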
Jul 2 06:55:07.918989 kubelet[2580]: I0702 06:55:07.918954 2580 server.go:1264] "Started kubelet" Jul 2 06:55:07.919825 kubelet[2580]: W0702 06:55:07.919129 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.919825 kubelet[2580]: E0702 06:55:07.919218 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.920235 kubelet[2580]: I0702 06:55:07.920204 2580 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:55:07.922907 kubelet[2580]: I0702 06:55:07.922888 2580 server.go:455] "Adding debug handlers to kubelet server" Jul 2 06:55:07.928897 kubelet[2580]: I0702 06:55:07.928875 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:55:07.933455 kubelet[2580]: I0702 06:55:07.933376 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:55:07.933705 kubelet[2580]: I0702 06:55:07.933684 2580 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:55:07.932000 audit[2590]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.934775 kernel: kauditd_printk_skb: 27 callbacks suppressed Jul 2 06:55:07.934829 kernel: audit: type=1325 audit(1719903307.932:285): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.935747 kubelet[2580]: E0702 06:55:07.935616 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.4:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-4.17de52f23486e15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-4,UID:ip-172-31-18-4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-4,},FirstTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,LastTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-4,}" Jul 2 06:55:07.932000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd1cd2210 a2=0 a3=7f0f2781de90 items=0 ppid=2580 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.939654 kernel: audit: type=1300 audit(1719903307.932:285): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd1cd2210 a2=0 a3=7f0f2781de90 items=0 ppid=2580 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.939739 kernel: audit: type=1327 
audit(1719903307.932:285): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 06:55:07.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 06:55:07.937000 audit[2591]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.943164 kernel: audit: type=1325 audit(1719903307.937:286): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.943244 kernel: audit: type=1300 audit(1719903307.937:286): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb1fd4d60 a2=0 a3=7f65d6939e90 items=0 ppid=2580 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.937000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb1fd4d60 a2=0 a3=7f65d6939e90 items=0 ppid=2580 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.943678 kubelet[2580]: I0702 06:55:07.943661 2580 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:55:07.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:55:07.948854 kubelet[2580]: I0702 06:55:07.948837 2580 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 06:55:07.949139 kubelet[2580]: I0702 06:55:07.949126 2580 reconciler.go:26] "Reconciler: start to sync state" Jul 2 06:55:07.950020 kubelet[2580]: W0702 06:55:07.949958 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.950175 kubelet[2580]: E0702 06:55:07.950162 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:07.951409 kernel: audit: type=1327 audit(1719903307.937:286): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:55:07.951505 kubelet[2580]: I0702 06:55:07.950782 2580 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:55:07.951505 kubelet[2580]: I0702 06:55:07.950884 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:55:07.951707 kubelet[2580]: E0702 06:55:07.951669 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": dial tcp 172.31.18.4:6443: connect: connection refused" interval="200ms" Jul 2 06:55:07.953134 kubelet[2580]: I0702 06:55:07.953039 2580 factory.go:221] 
Registration of the containerd container factory successfully Jul 2 06:55:07.964000 audit[2594]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.964000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc95a572f0 a2=0 a3=7f1886921e90 items=0 ppid=2580 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.971334 kernel: audit: type=1325 audit(1719903307.964:287): table=filter:28 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.971448 kernel: audit: type=1300 audit(1719903307.964:287): arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc95a572f0 a2=0 a3=7f1886921e90 items=0 ppid=2580 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:55:07.974540 kernel: audit: type=1327 audit(1719903307.964:287): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:55:07.977099 kubelet[2580]: I0702 06:55:07.977085 2580 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:55:07.977204 kubelet[2580]: I0702 06:55:07.977195 2580 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:55:07.977318 kubelet[2580]: I0702 06:55:07.977311 2580 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:55:07.977000 audit[2598]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.977000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd3bbc0eb0 a2=0 a3=7f4187db2e90 items=0 ppid=2580 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:07.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:55:07.980508 kernel: audit: type=1325 audit(1719903307.977:288): table=filter:29 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:07.981099 kubelet[2580]: I0702 06:55:07.981082 2580 policy_none.go:49] "None policy: Start" Jul 2 06:55:07.982045 kubelet[2580]: I0702 06:55:07.982032 2580 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:55:07.982139 kubelet[2580]: I0702 06:55:07.982131 2580 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:55:07.989362 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 06:55:08.000313 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 06:55:08.010586 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
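
Annotation: the NETFILTER_CFG/SYSCALL audit records interleaved above each carry a PROCTITLE line whose value is the hex-encoded, NUL-separated argv of the iptables invocation the kubelet made (here it is creating its KUBE-IPTABLES-HINT and KUBE-FIREWALL chains). Decoding one makes the records much easier to read; a minimal sketch:

    def decode_proctitle(hex_argv: str) -> str:
        """Turn an audit PROCTITLE hex blob back into a command line."""
        return " ".join(bytes.fromhex(hex_argv).decode().split("\x00"))

    # PROCTITLE value copied from one of the audit records above.
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
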
Jul 2 06:55:08.013000 audit[2602]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:08.013000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd3770aa90 a2=0 a3=7fdca17a1e90 items=0 ppid=2580 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 2 06:55:08.015385 kubelet[2580]: I0702 06:55:08.015246 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:55:08.015000 audit[2604]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:08.015000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff19401da0 a2=0 a3=7fe5def7be90 items=0 ppid=2580 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 06:55:08.017944 kubelet[2580]: I0702 06:55:08.017879 2580 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:55:08.018062 kubelet[2580]: I0702 06:55:08.018044 2580 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 06:55:08.018141 kubelet[2580]: I0702 06:55:08.018131 2580 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:55:08.018207 kubelet[2580]: I0702 06:55:08.018199 2580 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 06:55:08.018335 kubelet[2580]: E0702 06:55:08.018315 2580 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:55:08.021094 kubelet[2580]: I0702 06:55:08.021028 2580 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 06:55:08.021212 kubelet[2580]: I0702 06:55:08.021197 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:55:08.022242 kubelet[2580]: W0702 06:55:08.022197 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:08.023000 audit[2606]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:08.023000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedf20c9d0 a2=0 a3=7f24d41b6e90 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 06:55:08.025581 kubelet[2580]: E0702 06:55:08.025562 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:08.027099 kubelet[2580]: E0702 06:55:08.027079 2580 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-4\" not found" Jul 2 06:55:08.026000 audit[2605]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:08.026000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4490f540 a2=0 a3=7f73e8db5e90 items=0 ppid=2580 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 06:55:08.028000 audit[2607]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:08.028000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffb4ebd830 a2=0 a3=7fe3efe78e90 items=0 ppid=2580 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.028000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 06:55:08.030000 audit[2608]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:08.030000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5234a890 a2=0 a3=7f282ce01e90 items=0 ppid=2580 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 06:55:08.031000 audit[2609]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:08.031000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0585e250 a2=0 a3=7f42286f0e90 items=0 ppid=2580 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 06:55:08.032000 audit[2610]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:08.032000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff37756290 a2=0 a3=7fa041b49e90 items=0 ppid=2580 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:08.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 06:55:08.049367 kubelet[2580]: I0702 06:55:08.049334 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:08.049808 kubelet[2580]: E0702 06:55:08.049779 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.4:6443/api/v1/nodes\": dial tcp 172.31.18.4:6443: connect: connection refused" node="ip-172-31-18-4" Jul 2 06:55:08.119374 kubelet[2580]: I0702 06:55:08.119309 2580 topology_manager.go:215] "Topology Admit Handler" podUID="a9ad319acb4f860a5889567e21ce7c1e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-4" Jul 2 06:55:08.121932 kubelet[2580]: I0702 06:55:08.121898 2580 topology_manager.go:215] "Topology Admit Handler" podUID="30742a74a3610ef683c4d0b0dd078dcb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.123984 kubelet[2580]: I0702 06:55:08.123847 2580 topology_manager.go:215] "Topology Admit Handler" podUID="05d94f2ce6a92591042367bbd7f705e4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-4" Jul 2 06:55:08.131580 systemd[1]: Created slice kubepods-burstable-poda9ad319acb4f860a5889567e21ce7c1e.slice - libcontainer container kubepods-burstable-poda9ad319acb4f860a5889567e21ce7c1e.slice. 
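
Annotation: with the API server at 172.31.18.4:6443 still refusing connections, the kubelet's lease controller keeps logging "Failed to ensure lease exists, will retry", and the quoted interval doubles on each consecutive failure: 200ms earlier, then 400ms, 800ms and 1.6s in the entries that follow. A sketch of that doubling backoff; only the doubling is visible in this log, so the 7s cap below is an assumption for illustration.

    def retry_intervals(base_ms: int = 200, cap_ms: int = 7000, attempts: int = 8):
        """Doubling backoff, as observed in the lease-controller retries."""
        interval = base_ms
        for _ in range(attempts):
            yield interval
            interval = min(interval * 2, cap_ms)  # cap value is an assumption

    print([f"{ms/1000:g}s" for ms in retry_intervals()])
    # -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s', '7s', '7s']
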
Jul 2 06:55:08.152693 systemd[1]: Created slice kubepods-burstable-pod30742a74a3610ef683c4d0b0dd078dcb.slice - libcontainer container kubepods-burstable-pod30742a74a3610ef683c4d0b0dd078dcb.slice. Jul 2 06:55:08.153749 kubelet[2580]: E0702 06:55:08.153623 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": dial tcp 172.31.18.4:6443: connect: connection refused" interval="400ms" Jul 2 06:55:08.158239 systemd[1]: Created slice kubepods-burstable-pod05d94f2ce6a92591042367bbd7f705e4.slice - libcontainer container kubepods-burstable-pod05d94f2ce6a92591042367bbd7f705e4.slice. Jul 2 06:55:08.252605 kubelet[2580]: I0702 06:55:08.252572 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.252820 kubelet[2580]: I0702 06:55:08.252793 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.252900 kubelet[2580]: I0702 06:55:08.252830 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.252900 kubelet[2580]: I0702 06:55:08.252858 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.252900 kubelet[2580]: I0702 06:55:08.252883 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-ca-certs\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:08.253032 kubelet[2580]: I0702 06:55:08.252905 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:08.253032 kubelet[2580]: I0702 06:55:08.252930 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:08.253032 kubelet[2580]: I0702 06:55:08.252954 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:08.253032 kubelet[2580]: I0702 06:55:08.252980 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05d94f2ce6a92591042367bbd7f705e4-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-4\" (UID: \"05d94f2ce6a92591042367bbd7f705e4\") " pod="kube-system/kube-scheduler-ip-172-31-18-4" Jul 2 06:55:08.254057 kubelet[2580]: I0702 06:55:08.254033 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:08.254368 kubelet[2580]: E0702 06:55:08.254330 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.4:6443/api/v1/nodes\": dial tcp 172.31.18.4:6443: connect: connection refused" node="ip-172-31-18-4" Jul 2 06:55:08.451577 containerd[1789]: time="2024-07-02T06:55:08.451151050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-4,Uid:a9ad319acb4f860a5889567e21ce7c1e,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:08.465957 containerd[1789]: time="2024-07-02T06:55:08.465897928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-4,Uid:30742a74a3610ef683c4d0b0dd078dcb,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:08.466555 containerd[1789]: time="2024-07-02T06:55:08.466513854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-4,Uid:05d94f2ce6a92591042367bbd7f705e4,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:08.555070 kubelet[2580]: E0702 06:55:08.555018 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": dial tcp 172.31.18.4:6443: connect: connection refused" interval="800ms" Jul 2 06:55:08.656380 kubelet[2580]: I0702 06:55:08.656344 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:08.656821 kubelet[2580]: E0702 06:55:08.656787 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.4:6443/api/v1/nodes\": dial tcp 172.31.18.4:6443: connect: connection refused" node="ip-172-31-18-4" Jul 2 06:55:08.835238 kubelet[2580]: W0702 06:55:08.835057 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:08.835238 kubelet[2580]: E0702 06:55:08.835109 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.010857 kubelet[2580]: W0702 06:55:09.010791 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.18.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-4&limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.010857 kubelet[2580]: E0702 06:55:09.010862 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-4&limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.039520 kubelet[2580]: W0702 06:55:09.038143 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.039520 kubelet[2580]: E0702 06:55:09.038230 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.053660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294854072.mount: Deactivated successfully. Jul 2 06:55:09.070293 containerd[1789]: time="2024-07-02T06:55:09.070238395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.072667 containerd[1789]: time="2024-07-02T06:55:09.072551873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 06:55:09.074374 containerd[1789]: time="2024-07-02T06:55:09.074333963Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.083263 containerd[1789]: time="2024-07-02T06:55:09.082769519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:55:09.085571 containerd[1789]: time="2024-07-02T06:55:09.085406382Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.091845 containerd[1789]: time="2024-07-02T06:55:09.091765015Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.096701 containerd[1789]: time="2024-07-02T06:55:09.096626628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:55:09.096953 containerd[1789]: time="2024-07-02T06:55:09.096924880Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.110338 containerd[1789]: time="2024-07-02T06:55:09.110287218Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.124035 containerd[1789]: 
time="2024-07-02T06:55:09.123982033Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.124824 containerd[1789]: time="2024-07-02T06:55:09.124778277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.511736ms" Jul 2 06:55:09.137089 containerd[1789]: time="2024-07-02T06:55:09.136739167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 670.71166ms" Jul 2 06:55:09.138379 containerd[1789]: time="2024-07-02T06:55:09.138340021Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.139737 containerd[1789]: time="2024-07-02T06:55:09.139690294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.008861ms" Jul 2 06:55:09.142968 containerd[1789]: time="2024-07-02T06:55:09.142935557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.145529 containerd[1789]: time="2024-07-02T06:55:09.145477385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.148512 containerd[1789]: time="2024-07-02T06:55:09.148468418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.152015 containerd[1789]: time="2024-07-02T06:55:09.151817961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:55:09.367528 kubelet[2580]: E0702 06:55:09.356428 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": dial tcp 172.31.18.4:6443: connect: connection refused" interval="1.6s" Jul 2 06:55:09.460233 kubelet[2580]: W0702 06:55:09.460131 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.460233 kubelet[2580]: E0702 06:55:09.460233 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:09.463695 kubelet[2580]: I0702 06:55:09.463662 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:09.464590 kubelet[2580]: E0702 06:55:09.464450 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.4:6443/api/v1/nodes\": dial tcp 172.31.18.4:6443: connect: connection refused" node="ip-172-31-18-4" Jul 2 06:55:09.541762 containerd[1789]: time="2024-07-02T06:55:09.541660593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:09.541762 containerd[1789]: time="2024-07-02T06:55:09.541726745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.542616 containerd[1789]: time="2024-07-02T06:55:09.541748271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:09.542616 containerd[1789]: time="2024-07-02T06:55:09.542549136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.545763 containerd[1789]: time="2024-07-02T06:55:09.545687567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:09.545955 containerd[1789]: time="2024-07-02T06:55:09.545789906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.545955 containerd[1789]: time="2024-07-02T06:55:09.545831073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:09.546068 containerd[1789]: time="2024-07-02T06:55:09.545959925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.579509 containerd[1789]: time="2024-07-02T06:55:09.579334042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:09.579978 containerd[1789]: time="2024-07-02T06:55:09.579634852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.579978 containerd[1789]: time="2024-07-02T06:55:09.579946340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:09.580282 containerd[1789]: time="2024-07-02T06:55:09.580198054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:09.597183 systemd[1]: Started cri-containerd-a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391.scope - libcontainer container a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391. Jul 2 06:55:09.643966 systemd[1]: Started cri-containerd-1f4ec3028cf392df205d230fb083e3e947fc2b8fce3724b5cea7909740701ed9.scope - libcontainer container 1f4ec3028cf392df205d230fb083e3e947fc2b8fce3724b5cea7909740701ed9. Jul 2 06:55:09.668760 systemd[1]: Started cri-containerd-29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80.scope - libcontainer container 29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80. Jul 2 06:55:09.678000 audit: BPF prog-id=55 op=LOAD Jul 2 06:55:09.683000 audit: BPF prog-id=56 op=LOAD Jul 2 06:55:09.683000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2637 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373638326335383338656135623938666230336537393933616234 Jul 2 06:55:09.684000 audit: BPF prog-id=57 op=LOAD Jul 2 06:55:09.684000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2637 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373638326335383338656135623938666230336537393933616234 Jul 2 06:55:09.684000 audit: BPF prog-id=57 op=UNLOAD Jul 2 06:55:09.684000 audit: BPF prog-id=56 op=UNLOAD Jul 2 06:55:09.684000 audit: BPF prog-id=58 op=LOAD Jul 2 06:55:09.684000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2637 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373638326335383338656135623938666230336537393933616234 Jul 2 06:55:09.710000 audit: BPF prog-id=59 op=LOAD Jul 2 06:55:09.711000 audit: BPF prog-id=60 op=LOAD Jul 2 06:55:09.711000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2638 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.711000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346563333032386366333932646632303564323330666230383365 Jul 2 06:55:09.711000 audit: BPF prog-id=61 op=LOAD Jul 2 06:55:09.711000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2638 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346563333032386366333932646632303564323330666230383365 Jul 2 06:55:09.711000 audit: BPF prog-id=61 op=UNLOAD Jul 2 06:55:09.711000 audit: BPF prog-id=60 op=UNLOAD Jul 2 06:55:09.711000 audit: BPF prog-id=62 op=LOAD Jul 2 06:55:09.711000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2638 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346563333032386366333932646632303564323330666230383365 Jul 2 06:55:09.727000 audit: BPF prog-id=63 op=LOAD Jul 2 06:55:09.728000 audit: BPF prog-id=64 op=LOAD Jul 2 06:55:09.728000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2649 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239653965326439303332633231656466386537633436316566346638 Jul 2 06:55:09.729000 audit: BPF prog-id=65 op=LOAD Jul 2 06:55:09.729000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2649 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239653965326439303332633231656466386537633436316566346638 Jul 2 06:55:09.729000 audit: BPF prog-id=65 op=UNLOAD Jul 2 06:55:09.729000 audit: BPF prog-id=64 op=UNLOAD Jul 2 06:55:09.729000 audit: BPF prog-id=66 op=LOAD Jul 2 06:55:09.729000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2649 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:09.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239653965326439303332633231656466386537633436316566346638 Jul 2 06:55:09.817188 containerd[1789]: time="2024-07-02T06:55:09.817136176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-4,Uid:05d94f2ce6a92591042367bbd7f705e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391\"" Jul 2 06:55:09.824857 containerd[1789]: time="2024-07-02T06:55:09.824815632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-4,Uid:a9ad319acb4f860a5889567e21ce7c1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4ec3028cf392df205d230fb083e3e947fc2b8fce3724b5cea7909740701ed9\"" Jul 2 06:55:09.830677 containerd[1789]: time="2024-07-02T06:55:09.830625868Z" level=info msg="CreateContainer within sandbox \"a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 06:55:09.833683 containerd[1789]: time="2024-07-02T06:55:09.833647033Z" level=info msg="CreateContainer within sandbox \"1f4ec3028cf392df205d230fb083e3e947fc2b8fce3724b5cea7909740701ed9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 06:55:09.848437 containerd[1789]: time="2024-07-02T06:55:09.848374767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-4,Uid:30742a74a3610ef683c4d0b0dd078dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80\"" Jul 2 06:55:09.883758 containerd[1789]: time="2024-07-02T06:55:09.883626533Z" level=info msg="CreateContainer within sandbox \"29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 06:55:09.915422 containerd[1789]: time="2024-07-02T06:55:09.915297545Z" level=info msg="CreateContainer within sandbox \"a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c\"" Jul 2 06:55:09.921665 containerd[1789]: time="2024-07-02T06:55:09.921615328Z" level=info msg="CreateContainer within sandbox \"1f4ec3028cf392df205d230fb083e3e947fc2b8fce3724b5cea7909740701ed9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd5623a4d2e6be0638a9eb36af7a9f043e1a89822cfd43b5f2a656ca1ac1cd4b\"" Jul 2 06:55:09.922014 containerd[1789]: time="2024-07-02T06:55:09.921981493Z" level=info msg="StartContainer for \"2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c\"" Jul 2 06:55:09.923476 containerd[1789]: time="2024-07-02T06:55:09.923442927Z" level=info msg="CreateContainer within sandbox \"29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d\"" Jul 2 06:55:09.924361 containerd[1789]: time="2024-07-02T06:55:09.924334994Z" level=info msg="StartContainer for 
\"dd5623a4d2e6be0638a9eb36af7a9f043e1a89822cfd43b5f2a656ca1ac1cd4b\"" Jul 2 06:55:09.941216 containerd[1789]: time="2024-07-02T06:55:09.941043155Z" level=info msg="StartContainer for \"61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d\"" Jul 2 06:55:09.998732 systemd[1]: Started cri-containerd-dd5623a4d2e6be0638a9eb36af7a9f043e1a89822cfd43b5f2a656ca1ac1cd4b.scope - libcontainer container dd5623a4d2e6be0638a9eb36af7a9f043e1a89822cfd43b5f2a656ca1ac1cd4b. Jul 2 06:55:10.029235 kubelet[2580]: E0702 06:55:10.028879 2580 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:10.071807 systemd[1]: Started cri-containerd-2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c.scope - libcontainer container 2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c. Jul 2 06:55:10.074223 systemd[1]: run-containerd-runc-k8s.io-2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c-runc.PjE8dE.mount: Deactivated successfully. Jul 2 06:55:10.085000 audit: BPF prog-id=67 op=LOAD Jul 2 06:55:10.086000 audit: BPF prog-id=68 op=LOAD Jul 2 06:55:10.086000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2638 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353632336134643265366265303633386139656233366166376139 Jul 2 06:55:10.086000 audit: BPF prog-id=69 op=LOAD Jul 2 06:55:10.086000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2638 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353632336134643265366265303633386139656233366166376139 Jul 2 06:55:10.086000 audit: BPF prog-id=69 op=UNLOAD Jul 2 06:55:10.086000 audit: BPF prog-id=68 op=UNLOAD Jul 2 06:55:10.086000 audit: BPF prog-id=70 op=LOAD Jul 2 06:55:10.086000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2638 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353632336134643265366265303633386139656233366166376139 Jul 2 06:55:10.105324 systemd[1]: 
run-containerd-runc-k8s.io-61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d-runc.fsnIpI.mount: Deactivated successfully. Jul 2 06:55:10.108000 audit: BPF prog-id=71 op=LOAD Jul 2 06:55:10.108000 audit: BPF prog-id=72 op=LOAD Jul 2 06:55:10.108000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2637 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353138373534343664613435376462356363653437313538363533 Jul 2 06:55:10.109000 audit: BPF prog-id=73 op=LOAD Jul 2 06:55:10.109000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2637 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353138373534343664613435376462356363653437313538363533 Jul 2 06:55:10.109000 audit: BPF prog-id=73 op=UNLOAD Jul 2 06:55:10.109000 audit: BPF prog-id=72 op=UNLOAD Jul 2 06:55:10.109000 audit: BPF prog-id=74 op=LOAD Jul 2 06:55:10.109000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2637 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353138373534343664613435376462356363653437313538363533 Jul 2 06:55:10.118521 systemd[1]: Started cri-containerd-61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d.scope - libcontainer container 61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d. 
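Throughout this window every request the kubelet makes to https://172.31.18.4:6443 fails with "connect: connection refused": the Node/Service/CSIDriver reflectors, the certificate signing request at 06:55:10.028, and the node-lease renewals. That is expected at this stage, because the kubelet is itself starting the kube-apiserver static pod whose sandbox and container scopes appear in the systemd/containerd lines above. The failing calls are ordinary client-go requests; below is a minimal sketch (not the kubelet's own code) of the node List that the reflector keeps retrying, assuming client-go and a kubeconfig at /etc/kubernetes/kubelet.conf pointing at the same endpoint — the path is an assumption, not taken from this log.

```go
// Minimal sketch of the List call behind the reflector errors above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; it must point at https://172.31.18.4:6443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the failing request in the log:
	// GET /api/v1/nodes?fieldSelector=metadata.name=ip-172-31-18-4&limit=500
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ip-172-31-18-4",
		Limit:         500,
	})
	if err != nil {
		// While the apiserver container is still coming up this returns
		// "dial tcp 172.31.18.4:6443: connect: connection refused".
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes found:", len(nodes.Items))
}
```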
Jul 2 06:55:10.167000 audit: BPF prog-id=75 op=LOAD Jul 2 06:55:10.169000 audit: BPF prog-id=76 op=LOAD Jul 2 06:55:10.169000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2649 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623533633664323931653865303336646336343865333636643861 Jul 2 06:55:10.170000 audit: BPF prog-id=77 op=LOAD Jul 2 06:55:10.170000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2649 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623533633664323931653865303336646336343865333636643861 Jul 2 06:55:10.170000 audit: BPF prog-id=77 op=UNLOAD Jul 2 06:55:10.171000 audit: BPF prog-id=76 op=UNLOAD Jul 2 06:55:10.171000 audit: BPF prog-id=78 op=LOAD Jul 2 06:55:10.171000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2649 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:10.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623533633664323931653865303336646336343865333636643861 Jul 2 06:55:10.199395 containerd[1789]: time="2024-07-02T06:55:10.199335726Z" level=info msg="StartContainer for \"dd5623a4d2e6be0638a9eb36af7a9f043e1a89822cfd43b5f2a656ca1ac1cd4b\" returns successfully" Jul 2 06:55:10.206584 containerd[1789]: time="2024-07-02T06:55:10.206534713Z" level=info msg="StartContainer for \"2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c\" returns successfully" Jul 2 06:55:10.249028 containerd[1789]: time="2024-07-02T06:55:10.248972604Z" level=info msg="StartContainer for \"61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d\" returns successfully" Jul 2 06:55:10.896968 kubelet[2580]: W0702 06:55:10.895916 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:10.896968 kubelet[2580]: E0702 06:55:10.895996 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:10.957635 kubelet[2580]: E0702 06:55:10.957571 2580 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": dial tcp 172.31.18.4:6443: connect: connection refused" interval="3.2s" Jul 2 06:55:11.062137 kubelet[2580]: W0702 06:55:11.062012 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:11.062137 kubelet[2580]: E0702 06:55:11.062088 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:11.066780 kubelet[2580]: I0702 06:55:11.066757 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:11.067266 kubelet[2580]: E0702 06:55:11.067241 2580 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.4:6443/api/v1/nodes\": dial tcp 172.31.18.4:6443: connect: connection refused" node="ip-172-31-18-4" Jul 2 06:55:11.223182 kubelet[2580]: E0702 06:55:11.222967 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.4:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-4.17de52f23486e15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-4,UID:ip-172-31-18-4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-4,},FirstTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,LastTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-4,}" Jul 2 06:55:11.513429 kubelet[2580]: W0702 06:55:11.513290 2580 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:11.513670 kubelet[2580]: E0702 06:55:11.513656 2580 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.4:6443: connect: connection refused Jul 2 06:55:11.688312 update_engine[1780]: I0702 06:55:11.687530 1780 update_attempter.cc:509] Updating boot flags... 
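The "Attempting to register node" / "Unable to register node with API server" pair above is the kubelet retrying a POST to /api/v1/nodes for ip-172-31-18-4, which fails for the same reason as the reflector and lease calls: the API server it needs is the static pod it has only just started. A minimal client-go sketch of that registration call follows; the kubeconfig path is an assumption, and the real kubelet attaches full node status, labels, and taints before posting.

```go
// Minimal sketch of the node registration (POST /api/v1/nodes) retried above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path pointing at https://172.31.18.4:6443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node := &corev1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "ip-172-31-18-4"},
	}
	// Until kube-apiserver is listening on 172.31.18.4:6443 this fails with
	// the same "connect: connection refused" seen in kubelet_node_status.go.
	created, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("register failed:", err)
		return
	}
	fmt.Println("registered node:", created.Name)
}
```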
Jul 2 06:55:11.819692 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2867) Jul 2 06:55:12.170513 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2868) Jul 2 06:55:13.453062 kernel: kauditd_printk_skb: 98 callbacks suppressed Jul 2 06:55:13.453218 kernel: audit: type=1400 audit(1719903313.445:333): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:13.453260 kernel: audit: type=1300 audit(1719903313.445:333): arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000855980 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:13.445000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:13.445000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000855980 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:13.445000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:13.458520 kernel: audit: type=1327 audit(1719903313.445:333): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:13.462511 kernel: audit: type=1400 audit(1719903313.459:334): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:13.459000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:13.459000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c0000b7480 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:13.469890 kernel: audit: type=1300 audit(1719903313.459:334): arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c0000b7480 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:13.470012 kernel: audit: type=1327 audit(1719903313.459:334): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:13.459000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:14.269235 kubelet[2580]: I0702 06:55:14.269206 2580 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:14.274000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.274000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.280237 kernel: audit: type=1400 audit(1719903314.274:335): avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.280345 kernel: audit: type=1400 audit(1719903314.274:336): avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.280380 kernel: audit: type=1300 audit(1719903314.274:336): arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00686e8a0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.274000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00686e8a0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.274000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.287541 kernel: audit: type=1327 audit(1719903314.274:336): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.274000 audit[2785]: SYSCALL 
arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c0083657d0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.274000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.274000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.274000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c0083fa3e0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.274000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.286000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7806 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.286000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c008365b00 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.286000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.300000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.300000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c0083fb020 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.300000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.300000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" 
path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:14.300000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c006dd4a80 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:55:14.300000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:55:14.497735 kubelet[2580]: E0702 06:55:14.497694 2580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-4\" not found" node="ip-172-31-18-4" Jul 2 06:55:14.586419 kubelet[2580]: I0702 06:55:14.586251 2580 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-4" Jul 2 06:55:14.908102 kubelet[2580]: I0702 06:55:14.908068 2580 apiserver.go:52] "Watching apiserver" Jul 2 06:55:14.950068 kubelet[2580]: I0702 06:55:14.950022 2580 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 06:55:16.712832 systemd[1]: Reloading. Jul 2 06:55:17.104215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 06:55:17.153000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:17.153000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00105ad40 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:17.153000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:17.157000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:17.158000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:17.158000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:55:17.158000 audit[2810]: SYSCALL arch=c000003e 
syscall=254 success=no exit=-13 a0=a a1=c000fc3fa0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:17.158000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:17.158000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000fc3f60 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:17.158000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:17.157000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000fc3f20 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:17.157000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:17.260000 audit: BPF prog-id=79 op=LOAD Jul 2 06:55:17.260000 audit: BPF prog-id=41 op=UNLOAD Jul 2 06:55:17.261000 audit: BPF prog-id=80 op=LOAD Jul 2 06:55:17.261000 audit: BPF prog-id=42 op=UNLOAD Jul 2 06:55:17.261000 audit: BPF prog-id=81 op=LOAD Jul 2 06:55:17.261000 audit: BPF prog-id=82 op=LOAD Jul 2 06:55:17.261000 audit: BPF prog-id=43 op=UNLOAD Jul 2 06:55:17.261000 audit: BPF prog-id=44 op=UNLOAD Jul 2 06:55:17.263000 audit: BPF prog-id=83 op=LOAD Jul 2 06:55:17.263000 audit: BPF prog-id=63 op=UNLOAD Jul 2 06:55:17.264000 audit: BPF prog-id=84 op=LOAD Jul 2 06:55:17.264000 audit: BPF prog-id=45 op=UNLOAD Jul 2 06:55:17.265000 audit: BPF prog-id=85 op=LOAD Jul 2 06:55:17.265000 audit: BPF prog-id=55 op=UNLOAD Jul 2 06:55:17.266000 audit: BPF prog-id=86 op=LOAD Jul 2 06:55:17.267000 audit: BPF prog-id=59 op=UNLOAD Jul 2 06:55:17.267000 audit: BPF prog-id=87 op=LOAD Jul 2 06:55:17.267000 audit: BPF prog-id=71 op=UNLOAD Jul 2 06:55:17.269000 audit: BPF prog-id=88 op=LOAD Jul 2 06:55:17.269000 audit: BPF prog-id=46 op=UNLOAD Jul 2 06:55:17.269000 audit: BPF prog-id=89 op=LOAD Jul 2 06:55:17.269000 audit: BPF prog-id=47 op=UNLOAD Jul 2 06:55:17.269000 audit: BPF prog-id=90 op=LOAD Jul 2 06:55:17.269000 audit: BPF prog-id=91 op=LOAD Jul 2 06:55:17.269000 audit: BPF prog-id=48 op=UNLOAD Jul 2 06:55:17.269000 audit: BPF prog-id=49 op=UNLOAD Jul 2 06:55:17.271000 audit: BPF prog-id=92 op=LOAD Jul 2 06:55:17.271000 audit: BPF prog-id=50 op=UNLOAD Jul 2 06:55:17.271000 audit: BPF prog-id=93 op=LOAD Jul 2 06:55:17.271000 audit: BPF 
prog-id=94 op=LOAD Jul 2 06:55:17.271000 audit: BPF prog-id=51 op=UNLOAD Jul 2 06:55:17.271000 audit: BPF prog-id=52 op=UNLOAD Jul 2 06:55:17.272000 audit: BPF prog-id=95 op=LOAD Jul 2 06:55:17.272000 audit: BPF prog-id=75 op=UNLOAD Jul 2 06:55:17.274000 audit: BPF prog-id=96 op=LOAD Jul 2 06:55:17.274000 audit: BPF prog-id=97 op=LOAD Jul 2 06:55:17.274000 audit: BPF prog-id=53 op=UNLOAD Jul 2 06:55:17.274000 audit: BPF prog-id=54 op=UNLOAD Jul 2 06:55:17.276000 audit: BPF prog-id=98 op=LOAD Jul 2 06:55:17.276000 audit: BPF prog-id=67 op=UNLOAD Jul 2 06:55:17.296687 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:17.299422 kubelet[2580]: E0702 06:55:17.297997 2580 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-18-4.17de52f23486e15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-4,UID:ip-172-31-18-4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-4,},FirstTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,LastTimestamp:2024-07-02 06:55:07.918926175 +0000 UTC m=+0.686408133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-4,}" Jul 2 06:55:17.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:17.319847 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:55:17.320038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:17.328685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:17.684945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:17.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:17.799758 kubelet[3113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:55:17.800269 kubelet[3113]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:55:17.800317 kubelet[3113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
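After the restart, kubelet[3113] warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and should move into the file passed via --config, i.e. a KubeletConfiguration object. Below is a minimal sketch of emitting such a config; it assumes the Go field names ContainerRuntimeEndpoint and VolumePluginDir from k8s.io/kubelet/config/v1beta1, and the containerd socket path and plugin directory are illustrative values, not read from this log.

```go
// Minimal sketch: generate a kubelet --config file covering the two
// deprecated flags mentioned in the warnings above.
package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"
	// Replaces the deprecated --container-runtime-endpoint flag (assumed
	// containerd socket path).
	cfg.ContainerRuntimeEndpoint = "unix:///run/containerd/containerd.sock"
	// Replaces the deprecated --volume-plugin-dir flag (assumed directory).
	cfg.VolumePluginDir = "/var/lib/kubelet/volumeplugins"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

(--pod-infra-container-image has no config-file replacement here; per the warning above, the image garbage collector takes the sandbox image from CRI instead.)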
Jul 2 06:55:17.802066 kubelet[3113]: I0702 06:55:17.802020 3113 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:55:17.809910 kubelet[3113]: I0702 06:55:17.809872 3113 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 06:55:17.809910 kubelet[3113]: I0702 06:55:17.809899 3113 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:55:17.810237 kubelet[3113]: I0702 06:55:17.810215 3113 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 06:55:17.815318 kubelet[3113]: I0702 06:55:17.815286 3113 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 06:55:17.825840 kubelet[3113]: I0702 06:55:17.825812 3113 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:55:17.845217 kubelet[3113]: I0702 06:55:17.845195 3113 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 06:55:17.845736 kubelet[3113]: I0702 06:55:17.845700 3113 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:55:17.846286 kubelet[3113]: I0702 06:55:17.845841 3113 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:55:17.846476 kubelet[3113]: I0702 06:55:17.846464 3113 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:55:17.846560 kubelet[3113]: I0702 06:55:17.846541 3113 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:55:17.849138 kubelet[3113]: I0702 06:55:17.849115 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:55:17.850806 kubelet[3113]: I0702 06:55:17.850790 3113 kubelet.go:400] "Attempting to sync node with API server" Jul 2 06:55:17.854443 kubelet[3113]: I0702 06:55:17.853545 3113 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jul 2 06:55:17.854443 kubelet[3113]: I0702 06:55:17.853588 3113 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:55:17.854443 kubelet[3113]: I0702 06:55:17.853612 3113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:55:17.862182 kubelet[3113]: I0702 06:55:17.862155 3113 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:55:17.864424 kubelet[3113]: I0702 06:55:17.864399 3113 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:55:17.866345 kubelet[3113]: I0702 06:55:17.866325 3113 server.go:1264] "Started kubelet" Jul 2 06:55:17.868944 kubelet[3113]: I0702 06:55:17.868914 3113 apiserver.go:52] "Watching apiserver" Jul 2 06:55:17.873778 kubelet[3113]: I0702 06:55:17.873753 3113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:55:17.888149 kubelet[3113]: I0702 06:55:17.887630 3113 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:55:17.889924 kubelet[3113]: I0702 06:55:17.888891 3113 server.go:455] "Adding debug handlers to kubelet server" Jul 2 06:55:17.890134 kubelet[3113]: I0702 06:55:17.890119 3113 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:55:17.891677 kubelet[3113]: I0702 06:55:17.890220 3113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:55:17.898225 kubelet[3113]: I0702 06:55:17.892406 3113 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:55:17.898225 kubelet[3113]: I0702 06:55:17.890258 3113 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 06:55:17.898225 kubelet[3113]: I0702 06:55:17.892651 3113 reconciler.go:26] "Reconciler: start to sync state" Jul 2 06:55:17.903225 kubelet[3113]: I0702 06:55:17.903033 3113 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:55:17.903225 kubelet[3113]: I0702 06:55:17.903209 3113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:55:17.906055 kubelet[3113]: I0702 06:55:17.905562 3113 factory.go:221] Registration of the containerd container factory successfully Jul 2 06:55:17.911518 kubelet[3113]: E0702 06:55:17.911471 3113 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:55:17.916388 kubelet[3113]: I0702 06:55:17.915612 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:55:17.917292 kubelet[3113]: I0702 06:55:17.917099 3113 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 06:55:17.917292 kubelet[3113]: I0702 06:55:17.917138 3113 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:55:17.917292 kubelet[3113]: I0702 06:55:17.917159 3113 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 06:55:17.917292 kubelet[3113]: E0702 06:55:17.917209 3113 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:55:18.021771 kubelet[3113]: E0702 06:55:18.018064 3113 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 06:55:18.022069 kubelet[3113]: I0702 06:55:18.018252 3113 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-4" Jul 2 06:55:18.038555 kubelet[3113]: I0702 06:55:18.038482 3113 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-4" Jul 2 06:55:18.042856 kubelet[3113]: I0702 06:55:18.042806 3113 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-4" Jul 2 06:55:18.071883 kubelet[3113]: I0702 06:55:18.071809 3113 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:55:18.071883 kubelet[3113]: I0702 06:55:18.071829 3113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:55:18.071883 kubelet[3113]: I0702 06:55:18.071850 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:55:18.072130 kubelet[3113]: I0702 06:55:18.072045 3113 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 06:55:18.072130 kubelet[3113]: I0702 06:55:18.072059 3113 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 06:55:18.072130 kubelet[3113]: I0702 06:55:18.072083 3113 policy_none.go:49] "None policy: Start" Jul 2 06:55:18.073152 kubelet[3113]: I0702 06:55:18.073088 3113 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:55:18.073152 kubelet[3113]: I0702 06:55:18.073114 3113 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:55:18.073326 kubelet[3113]: I0702 06:55:18.073285 3113 state_mem.go:75] "Updated machine memory state" Jul 2 06:55:18.093983 kubelet[3113]: I0702 06:55:18.093952 3113 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:55:18.095559 kubelet[3113]: I0702 06:55:18.094245 3113 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 06:55:18.095559 kubelet[3113]: I0702 06:55:18.094759 3113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:55:18.223117 kubelet[3113]: I0702 06:55:18.223064 3113 topology_manager.go:215] "Topology Admit Handler" podUID="a9ad319acb4f860a5889567e21ce7c1e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-4" Jul 2 06:55:18.223425 kubelet[3113]: I0702 06:55:18.223387 3113 topology_manager.go:215] "Topology Admit Handler" podUID="30742a74a3610ef683c4d0b0dd078dcb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:18.223537 kubelet[3113]: I0702 06:55:18.223476 3113 topology_manager.go:215] "Topology Admit Handler" podUID="05d94f2ce6a92591042367bbd7f705e4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-4" Jul 2 06:55:18.275936 kubelet[3113]: I0702 06:55:18.275777 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-4" podStartSLOduration=0.275760134 
podStartE2EDuration="275.760134ms" podCreationTimestamp="2024-07-02 06:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:18.269116683 +0000 UTC m=+0.559988533" watchObservedRunningTime="2024-07-02 06:55:18.275760134 +0000 UTC m=+0.566631963" Jul 2 06:55:18.276184 kubelet[3113]: I0702 06:55:18.275951 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-4" podStartSLOduration=2.275941881 podStartE2EDuration="2.275941881s" podCreationTimestamp="2024-07-02 06:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:18.258835038 +0000 UTC m=+0.549706882" watchObservedRunningTime="2024-07-02 06:55:18.275941881 +0000 UTC m=+0.566813728" Jul 2 06:55:18.283114 kubelet[3113]: I0702 06:55:18.280228 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-4" podStartSLOduration=0.28019945 podStartE2EDuration="280.19945ms" podCreationTimestamp="2024-07-02 06:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:18.280131263 +0000 UTC m=+0.571003114" watchObservedRunningTime="2024-07-02 06:55:18.28019945 +0000 UTC m=+0.571071299" Jul 2 06:55:18.293744 kubelet[3113]: I0702 06:55:18.293706 3113 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 06:55:18.296899 kubelet[3113]: I0702 06:55:18.296039 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:18.296899 kubelet[3113]: I0702 06:55:18.296182 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:18.296899 kubelet[3113]: I0702 06:55:18.296220 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-ca-certs\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:18.296899 kubelet[3113]: I0702 06:55:18.296273 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:18.296899 kubelet[3113]: I0702 06:55:18.296299 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-kubeconfig\") 
pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:18.300568 kubelet[3113]: I0702 06:55:18.296344 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05d94f2ce6a92591042367bbd7f705e4-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-4\" (UID: \"05d94f2ce6a92591042367bbd7f705e4\") " pod="kube-system/kube-scheduler-ip-172-31-18-4" Jul 2 06:55:18.300568 kubelet[3113]: I0702 06:55:18.296368 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:18.300568 kubelet[3113]: I0702 06:55:18.296415 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9ad319acb4f860a5889567e21ce7c1e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-4\" (UID: \"a9ad319acb4f860a5889567e21ce7c1e\") " pod="kube-system/kube-apiserver-ip-172-31-18-4" Jul 2 06:55:18.300568 kubelet[3113]: I0702 06:55:18.296441 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30742a74a3610ef683c4d0b0dd078dcb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-4\" (UID: \"30742a74a3610ef683c4d0b0dd078dcb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-4" Jul 2 06:55:24.006975 sudo[2074]: pam_unix(sudo:session): session closed for user root Jul 2 06:55:24.005000 audit[2074]: USER_END pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.015974 kernel: kauditd_printk_skb: 68 callbacks suppressed Jul 2 06:55:24.016648 kernel: audit: type=1106 audit(1719903324.005:387): pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.016698 kernel: audit: type=1104 audit(1719903324.006:388): pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.006000 audit[2074]: CRED_DISP pid=2074 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:24.047031 sshd[2071]: pam_unix(sshd:session): session closed for user core Jul 2 06:55:24.048000 audit[2071]: USER_END pid=2071 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:55:24.053751 kernel: audit: type=1106 audit(1719903324.048:389): pid=2071 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:55:24.048000 audit[2071]: CRED_DISP pid=2071 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:55:24.066504 kernel: audit: type=1104 audit(1719903324.048:390): pid=2071 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:55:24.061438 systemd[1]: sshd@6-172.31.18.4:22-139.178.89.65:51994.service: Deactivated successfully. Jul 2 06:55:24.063016 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 06:55:24.063192 systemd[1]: session-7.scope: Consumed 4.828s CPU time. Jul 2 06:55:24.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.4:22-139.178.89.65:51994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.067879 systemd-logind[1779]: Session 7 logged out. Waiting for processes to exit. Jul 2 06:55:24.069414 systemd-logind[1779]: Removed session 7. Jul 2 06:55:24.071936 kernel: audit: type=1131 audit(1719903324.052:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.4:22-139.178.89.65:51994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:27.672000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=7831 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 06:55:27.672000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000861e40 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:27.689161 kernel: audit: type=1400 audit(1719903327.672:392): avc: denied { watch } for pid=2810 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=7831 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 06:55:27.689305 kernel: audit: type=1300 audit(1719903327.672:392): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000861e40 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:55:27.672000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:27.693765 kernel: audit: type=1327 audit(1719903327.672:392): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:55:30.476603 kubelet[3113]: I0702 06:55:30.476573 3113 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 06:55:30.478872 containerd[1789]: time="2024-07-02T06:55:30.478808762Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 06:55:30.479742 kubelet[3113]: I0702 06:55:30.479719 3113 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 06:55:31.471689 kubelet[3113]: I0702 06:55:31.471640 3113 topology_manager.go:215] "Topology Admit Handler" podUID="ad72f1f8-5752-4971-bdad-ae8ba3440b77" podNamespace="kube-system" podName="kube-proxy-smtw6" Jul 2 06:55:31.480427 systemd[1]: Created slice kubepods-besteffort-podad72f1f8_5752_4971_bdad_ae8ba3440b77.slice - libcontainer container kubepods-besteffort-podad72f1f8_5752_4971_bdad_ae8ba3440b77.slice. 
Jul 2 06:55:31.500425 kubelet[3113]: I0702 06:55:31.500379 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad72f1f8-5752-4971-bdad-ae8ba3440b77-kube-proxy\") pod \"kube-proxy-smtw6\" (UID: \"ad72f1f8-5752-4971-bdad-ae8ba3440b77\") " pod="kube-system/kube-proxy-smtw6" Jul 2 06:55:31.500866 kubelet[3113]: I0702 06:55:31.500434 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad72f1f8-5752-4971-bdad-ae8ba3440b77-xtables-lock\") pod \"kube-proxy-smtw6\" (UID: \"ad72f1f8-5752-4971-bdad-ae8ba3440b77\") " pod="kube-system/kube-proxy-smtw6" Jul 2 06:55:31.500866 kubelet[3113]: I0702 06:55:31.500476 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad72f1f8-5752-4971-bdad-ae8ba3440b77-lib-modules\") pod \"kube-proxy-smtw6\" (UID: \"ad72f1f8-5752-4971-bdad-ae8ba3440b77\") " pod="kube-system/kube-proxy-smtw6" Jul 2 06:55:31.502719 kubelet[3113]: I0702 06:55:31.502675 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjvpf\" (UniqueName: \"kubernetes.io/projected/ad72f1f8-5752-4971-bdad-ae8ba3440b77-kube-api-access-kjvpf\") pod \"kube-proxy-smtw6\" (UID: \"ad72f1f8-5752-4971-bdad-ae8ba3440b77\") " pod="kube-system/kube-proxy-smtw6" Jul 2 06:55:31.564127 kubelet[3113]: I0702 06:55:31.564042 3113 topology_manager.go:215] "Topology Admit Handler" podUID="22eafe53-8e08-4fad-9c05-cc8a0fdd0965" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-g8g4z" Jul 2 06:55:31.571995 systemd[1]: Created slice kubepods-besteffort-pod22eafe53_8e08_4fad_9c05_cc8a0fdd0965.slice - libcontainer container kubepods-besteffort-pod22eafe53_8e08_4fad_9c05_cc8a0fdd0965.slice. Jul 2 06:55:31.602972 kubelet[3113]: I0702 06:55:31.602892 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22eafe53-8e08-4fad-9c05-cc8a0fdd0965-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-g8g4z\" (UID: \"22eafe53-8e08-4fad-9c05-cc8a0fdd0965\") " pod="tigera-operator/tigera-operator-76ff79f7fd-g8g4z" Jul 2 06:55:31.603224 kubelet[3113]: I0702 06:55:31.603206 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj478\" (UniqueName: \"kubernetes.io/projected/22eafe53-8e08-4fad-9c05-cc8a0fdd0965-kube-api-access-zj478\") pod \"tigera-operator-76ff79f7fd-g8g4z\" (UID: \"22eafe53-8e08-4fad-9c05-cc8a0fdd0965\") " pod="tigera-operator/tigera-operator-76ff79f7fd-g8g4z" Jul 2 06:55:31.791793 containerd[1789]: time="2024-07-02T06:55:31.791651665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smtw6,Uid:ad72f1f8-5752-4971-bdad-ae8ba3440b77,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:31.850840 containerd[1789]: time="2024-07-02T06:55:31.850636715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:31.850840 containerd[1789]: time="2024-07-02T06:55:31.850744250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:31.851081 containerd[1789]: time="2024-07-02T06:55:31.850777195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:31.851081 containerd[1789]: time="2024-07-02T06:55:31.850867733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:31.879159 containerd[1789]: time="2024-07-02T06:55:31.879105091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-g8g4z,Uid:22eafe53-8e08-4fad-9c05-cc8a0fdd0965,Namespace:tigera-operator,Attempt:0,}" Jul 2 06:55:31.887868 systemd[1]: Started cri-containerd-5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26.scope - libcontainer container 5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26. Jul 2 06:55:31.908531 kernel: audit: type=1334 audit(1719903331.903:393): prog-id=99 op=LOAD Jul 2 06:55:31.908661 kernel: audit: type=1334 audit(1719903331.904:394): prog-id=100 op=LOAD Jul 2 06:55:31.908696 kernel: audit: type=1300 audit(1719903331.904:394): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3199 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.903000 audit: BPF prog-id=99 op=LOAD Jul 2 06:55:31.904000 audit: BPF prog-id=100 op=LOAD Jul 2 06:55:31.904000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3199 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.914684 kernel: audit: type=1327 audit(1719903331.904:394): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564666664653835623766323431336563376561663234663361353437 Jul 2 06:55:31.916131 kernel: audit: type=1334 audit(1719903331.904:395): prog-id=101 op=LOAD Jul 2 06:55:31.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564666664653835623766323431336563376561663234663361353437 Jul 2 06:55:31.904000 audit: BPF prog-id=101 op=LOAD Jul 2 06:55:31.904000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3199 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.920896 kernel: audit: type=1300 audit(1719903331.904:395): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3199 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.904000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564666664653835623766323431336563376561663234663361353437 Jul 2 06:55:31.925556 kernel: audit: type=1327 audit(1719903331.904:395): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564666664653835623766323431336563376561663234663361353437 Jul 2 06:55:31.925664 kernel: audit: type=1334 audit(1719903331.904:396): prog-id=101 op=UNLOAD Jul 2 06:55:31.904000 audit: BPF prog-id=101 op=UNLOAD Jul 2 06:55:31.904000 audit: BPF prog-id=100 op=UNLOAD Jul 2 06:55:31.927006 kernel: audit: type=1334 audit(1719903331.904:397): prog-id=100 op=UNLOAD Jul 2 06:55:31.905000 audit: BPF prog-id=102 op=LOAD Jul 2 06:55:31.930351 kernel: audit: type=1334 audit(1719903331.905:398): prog-id=102 op=LOAD Jul 2 06:55:31.905000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3199 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564666664653835623766323431336563376561663234663361353437 Jul 2 06:55:31.943467 containerd[1789]: time="2024-07-02T06:55:31.943388247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smtw6,Uid:ad72f1f8-5752-4971-bdad-ae8ba3440b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26\"" Jul 2 06:55:31.949762 containerd[1789]: time="2024-07-02T06:55:31.949354495Z" level=info msg="CreateContainer within sandbox \"5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 06:55:31.970068 containerd[1789]: time="2024-07-02T06:55:31.969961296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:31.970880 containerd[1789]: time="2024-07-02T06:55:31.970820945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:31.989955 containerd[1789]: time="2024-07-02T06:55:31.970910466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:31.989955 containerd[1789]: time="2024-07-02T06:55:31.970960087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:32.002238 containerd[1789]: time="2024-07-02T06:55:32.002175847Z" level=info msg="CreateContainer within sandbox \"5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84635761c80c0c6c5259a83ad5a1e0ebd5e5f6601a1e48c496ca7b65afe8a7e3\"" Jul 2 06:55:32.012427 containerd[1789]: time="2024-07-02T06:55:32.010232710Z" level=info msg="StartContainer for \"84635761c80c0c6c5259a83ad5a1e0ebd5e5f6601a1e48c496ca7b65afe8a7e3\"" Jul 2 06:55:32.011903 systemd[1]: Started cri-containerd-b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb.scope - libcontainer container b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb. Jul 2 06:55:32.036000 audit: BPF prog-id=103 op=LOAD Jul 2 06:55:32.037000 audit: BPF prog-id=104 op=LOAD Jul 2 06:55:32.037000 audit[3249]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3239 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234343163663335303735313330643165303135316639343332313066 Jul 2 06:55:32.038000 audit: BPF prog-id=105 op=LOAD Jul 2 06:55:32.038000 audit[3249]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3239 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234343163663335303735313330643165303135316639343332313066 Jul 2 06:55:32.038000 audit: BPF prog-id=105 op=UNLOAD Jul 2 06:55:32.038000 audit: BPF prog-id=104 op=UNLOAD Jul 2 06:55:32.038000 audit: BPF prog-id=106 op=LOAD Jul 2 06:55:32.038000 audit[3249]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3239 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234343163663335303735313330643165303135316639343332313066 Jul 2 06:55:32.070763 systemd[1]: Started cri-containerd-84635761c80c0c6c5259a83ad5a1e0ebd5e5f6601a1e48c496ca7b65afe8a7e3.scope - libcontainer container 84635761c80c0c6c5259a83ad5a1e0ebd5e5f6601a1e48c496ca7b65afe8a7e3. 
Jul 2 06:55:32.117000 audit: BPF prog-id=107 op=LOAD Jul 2 06:55:32.117000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3199 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834363335373631633830633063366335323539613833616435613165 Jul 2 06:55:32.117000 audit: BPF prog-id=108 op=LOAD Jul 2 06:55:32.117000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3199 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834363335373631633830633063366335323539613833616435613165 Jul 2 06:55:32.117000 audit: BPF prog-id=108 op=UNLOAD Jul 2 06:55:32.117000 audit: BPF prog-id=107 op=UNLOAD Jul 2 06:55:32.117000 audit: BPF prog-id=109 op=LOAD Jul 2 06:55:32.117000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3199 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834363335373631633830633063366335323539613833616435613165 Jul 2 06:55:32.140919 containerd[1789]: time="2024-07-02T06:55:32.139188552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-g8g4z,Uid:22eafe53-8e08-4fad-9c05-cc8a0fdd0965,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb\"" Jul 2 06:55:32.155548 containerd[1789]: time="2024-07-02T06:55:32.155475473Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 06:55:32.191381 containerd[1789]: time="2024-07-02T06:55:32.191297834Z" level=info msg="StartContainer for \"84635761c80c0c6c5259a83ad5a1e0ebd5e5f6601a1e48c496ca7b65afe8a7e3\" returns successfully" Jul 2 06:55:32.630802 systemd[1]: run-containerd-runc-k8s.io-5dffde85b7f2413ec7eaf24f3a547d04f9bcc12b29594deeebb665067d068c26-runc.4i4yHP.mount: Deactivated successfully. 
Jul 2 06:55:32.741000 audit[3334]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.741000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9a4f7500 a2=0 a3=7ffc9a4f74ec items=0 ppid=3284 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:55:32.743000 audit[3335]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:32.743000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbfc7a790 a2=0 a3=7ffcbfc7a77c items=0 ppid=3284 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:55:32.744000 audit[3336]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.744000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5e458810 a2=0 a3=7ffe5e4587fc items=0 ppid=3284 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:55:32.745000 audit[3337]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=3337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:32.745000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc81502820 a2=0 a3=7ffc8150280c items=0 ppid=3284 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:55:32.747000 audit[3338]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=3338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:32.747000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf4097db0 a2=0 a3=7ffdf4097d9c items=0 ppid=3284 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:55:32.750000 audit[3339]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
06:55:32.750000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc32bdc930 a2=0 a3=7ffc32bdc91c items=0 ppid=3284 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:55:32.855000 audit[3340]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.855000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffcd7ba000 a2=0 a3=7fffcd7b9fec items=0 ppid=3284 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:55:32.861000 audit[3342]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.861000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcc31922b0 a2=0 a3=7ffcc319229c items=0 ppid=3284 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 06:55:32.869000 audit[3345]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.869000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdd9239450 a2=0 a3=7ffdd923943c items=0 ppid=3284 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 06:55:32.870000 audit[3346]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.870000 audit[3346]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc8405a80 a2=0 a3=7fffc8405a6c items=0 ppid=3284 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:55:32.874000 audit[3348]: NETFILTER_CFG 
table=filter:48 family=2 entries=1 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.874000 audit[3348]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe58598150 a2=0 a3=7ffe5859813c items=0 ppid=3284 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:55:32.876000 audit[3349]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.876000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4f515bd0 a2=0 a3=7ffd4f515bbc items=0 ppid=3284 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:55:32.899000 audit[3351]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.899000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5d800020 a2=0 a3=7ffc5d80000c items=0 ppid=3284 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:55:32.906000 audit[3354]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.906000 audit[3354]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe202cbbb0 a2=0 a3=7ffe202cbb9c items=0 ppid=3284 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 06:55:32.910000 audit[3355]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.910000 audit[3355]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeccf7a920 a2=0 a3=7ffeccf7a90c items=0 ppid=3284 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:55:32.913000 audit[3357]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.913000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe15420f40 a2=0 a3=7ffe15420f2c items=0 ppid=3284 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:55:32.915000 audit[3358]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.915000 audit[3358]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0b679ce0 a2=0 a3=7ffe0b679ccc items=0 ppid=3284 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:55:32.927000 audit[3360]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.927000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe29186b80 a2=0 a3=7ffe29186b6c items=0 ppid=3284 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:55:32.954000 audit[3363]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.954000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdef898d00 a2=0 a3=7ffdef898cec items=0 ppid=3284 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:55:32.966000 audit[3366]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.966000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffc16a207d0 a2=0 a3=7ffc16a207bc items=0 ppid=3284 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:55:32.968000 audit[3367]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.968000 audit[3367]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1c3e4540 a2=0 a3=7ffd1c3e452c items=0 ppid=3284 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:55:32.973000 audit[3369]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.973000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc32bc4e50 a2=0 a3=7ffc32bc4e3c items=0 ppid=3284 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:55:32.980000 audit[3372]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.980000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff93e7c1e0 a2=0 a3=7fff93e7c1cc items=0 ppid=3284 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:55:32.982000 audit[3373]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.982000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd08d18ad0 a2=0 a3=7ffd08d18abc items=0 ppid=3284 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:55:32.986000 audit[3375]: NETFILTER_CFG table=nat:62 family=2 entries=1 
op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:55:32.986000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcbc7d1bf0 a2=0 a3=7ffcbc7d1bdc items=0 ppid=3284 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:32.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:55:33.016000 audit[3381]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:33.016000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffeb4a3dad0 a2=0 a3=7ffeb4a3dabc items=0 ppid=3284 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:33.034000 audit[3381]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:33.034000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeb4a3dad0 a2=0 a3=7ffeb4a3dabc items=0 ppid=3284 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:33.046000 audit[3386]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.046000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe59491630 a2=0 a3=7ffe5949161c items=0 ppid=3284 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:55:33.066000 audit[3388]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.066000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe1cb3dd80 a2=0 a3=7ffe1cb3dd6c items=0 ppid=3284 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.066000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 06:55:33.114000 audit[3391]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.114000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe912aa1d0 a2=0 a3=7ffe912aa1bc items=0 ppid=3284 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 06:55:33.124000 audit[3392]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.124000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd994ef2c0 a2=0 a3=7ffd994ef2ac items=0 ppid=3284 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:55:33.129000 audit[3394]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.129000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffee848bd0 a2=0 a3=7fffee848bbc items=0 ppid=3284 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:55:33.143000 audit[3395]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.143000 audit[3395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8eaf1630 a2=0 a3=7ffc8eaf161c items=0 ppid=3284 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:55:33.166000 audit[3397]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.166000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff105d54a0 a2=0 a3=7fff105d548c 
items=0 ppid=3284 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 06:55:33.176000 audit[3400]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.176000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcf42a93f0 a2=0 a3=7ffcf42a93dc items=0 ppid=3284 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:55:33.177000 audit[3401]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.177000 audit[3401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00454550 a2=0 a3=7ffc0045453c items=0 ppid=3284 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:55:33.182000 audit[3403]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.182000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd30678110 a2=0 a3=7ffd306780fc items=0 ppid=3284 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:55:33.186000 audit[3404]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.186000 audit[3404]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaccea590 a2=0 a3=7ffdaccea57c items=0 ppid=3284 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:55:33.191000 audit[3406]: NETFILTER_CFG 
table=filter:76 family=10 entries=1 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.191000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9d7b8f30 a2=0 a3=7ffe9d7b8f1c items=0 ppid=3284 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:55:33.202000 audit[3409]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.202000 audit[3409]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5e6cf600 a2=0 a3=7ffc5e6cf5ec items=0 ppid=3284 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:55:33.209000 audit[3412]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.209000 audit[3412]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6e160470 a2=0 a3=7ffd6e16045c items=0 ppid=3284 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 06:55:33.212000 audit[3413]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.212000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3a04e600 a2=0 a3=7ffc3a04e5ec items=0 ppid=3284 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:55:33.215000 audit[3415]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.215000 audit[3415]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff7f4018a0 a2=0 a3=7fff7f40188c items=0 ppid=3284 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:55:33.219000 audit[3418]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.219000 audit[3418]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd84320d90 a2=0 a3=7ffd84320d7c items=0 ppid=3284 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:55:33.221000 audit[3419]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.221000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff30a4a3a0 a2=0 a3=7fff30a4a38c items=0 ppid=3284 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:55:33.224000 audit[3421]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.224000 audit[3421]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffce2f6f9f0 a2=0 a3=7ffce2f6f9dc items=0 ppid=3284 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:55:33.226000 audit[3422]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.226000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9b3b11a0 a2=0 a3=7ffc9b3b118c items=0 ppid=3284 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:55:33.230000 audit[3424]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.230000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcdfad8840 a2=0 
a3=7ffcdfad882c items=0 ppid=3284 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:55:33.235000 audit[3427]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:55:33.235000 audit[3427]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf8276e10 a2=0 a3=7ffcf8276dfc items=0 ppid=3284 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:55:33.239000 audit[3429]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:55:33.239000 audit[3429]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fffcf9cf260 a2=0 a3=7fffcf9cf24c items=0 ppid=3284 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.239000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:33.240000 audit[3429]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:55:33.240000 audit[3429]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffcf9cf260 a2=0 a3=7fffcf9cf24c items=0 ppid=3284 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:33.240000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:33.608140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913941420.mount: Deactivated successfully. 
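The proctitle= fields in the audit records above are the invoking command lines, hex-encoded by the kernel with NUL bytes separating the arguments (ausearch -i performs the same interpretation). A minimal decoding sketch, using the KUBE-FIREWALL proctitle from the last ip6tables record above as input:

    # Decode an audit PROCTITLE value back into the argv it encodes.
    # The kernel hex-encodes the command line and separates arguments
    # with NUL bytes, so splitting on b"\x00" recovers the argument list.
    proctitle_hex = (
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> ip6tables -w 5 -W 100000 -I INPUT -t filter -j KUBE-FIREWALL

Decoding the other records the same way shows kube-proxy creating and wiring its KUBE-SERVICES, KUBE-FORWARD, KUBE-POSTROUTING, KUBE-PROXY-FIREWALL and KUBE-FIREWALL chains for IPv6 (family=10), alongside the IPv4 (family=2) iptables-restore activity that follows.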
Jul 2 06:55:34.566966 containerd[1789]: time="2024-07-02T06:55:34.566917126Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:34.568711 containerd[1789]: time="2024-07-02T06:55:34.568651790Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076064" Jul 2 06:55:34.571134 containerd[1789]: time="2024-07-02T06:55:34.571054954Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:34.574180 containerd[1789]: time="2024-07-02T06:55:34.574140432Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:34.577004 containerd[1789]: time="2024-07-02T06:55:34.576961632Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:34.578173 containerd[1789]: time="2024-07-02T06:55:34.578130459Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.42238429s" Jul 2 06:55:34.578329 containerd[1789]: time="2024-07-02T06:55:34.578304275Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 06:55:34.586871 containerd[1789]: time="2024-07-02T06:55:34.586824865Z" level=info msg="CreateContainer within sandbox \"b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 06:55:34.606689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197103398.mount: Deactivated successfully. Jul 2 06:55:34.621188 containerd[1789]: time="2024-07-02T06:55:34.621114104Z" level=info msg="CreateContainer within sandbox \"b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d\"" Jul 2 06:55:34.623286 containerd[1789]: time="2024-07-02T06:55:34.622038341Z" level=info msg="StartContainer for \"a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d\"" Jul 2 06:55:34.689718 systemd[1]: Started cri-containerd-a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d.scope - libcontainer container a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d. 
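For a rough read of the pull above: containerd reports bytes read=22076064 for quay.io/tigera/operator:v1.34.0 and a pull time of 2.42238429s, which works out to roughly 9 MB/s. A minimal check, assuming the bytes-read counter covers the whole pull window:

    # Back-of-the-envelope throughput for the tigera/operator pull logged above.
    bytes_read = 22_076_064        # "stop pulling image ...: bytes read=22076064"
    pull_seconds = 2.42238429      # "Pulled image ... in 2.42238429s"
    print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")   # ~9.1 MB/s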
Jul 2 06:55:34.702000 audit: BPF prog-id=110 op=LOAD Jul 2 06:55:34.703000 audit: BPF prog-id=111 op=LOAD Jul 2 06:55:34.703000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3239 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:34.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137613536303764613237323266623733633835386338323636646638 Jul 2 06:55:34.703000 audit: BPF prog-id=112 op=LOAD Jul 2 06:55:34.703000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3239 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:34.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137613536303764613237323266623733633835386338323636646638 Jul 2 06:55:34.703000 audit: BPF prog-id=112 op=UNLOAD Jul 2 06:55:34.703000 audit: BPF prog-id=111 op=UNLOAD Jul 2 06:55:34.703000 audit: BPF prog-id=113 op=LOAD Jul 2 06:55:34.703000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3239 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:34.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137613536303764613237323266623733633835386338323636646638 Jul 2 06:55:34.759054 containerd[1789]: time="2024-07-02T06:55:34.759002700Z" level=info msg="StartContainer for \"a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d\" returns successfully" Jul 2 06:55:35.124592 kubelet[3113]: I0702 06:55:35.124534 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smtw6" podStartSLOduration=4.124511281 podStartE2EDuration="4.124511281s" podCreationTimestamp="2024-07-02 06:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:33.167508973 +0000 UTC m=+15.458380839" watchObservedRunningTime="2024-07-02 06:55:35.124511281 +0000 UTC m=+17.415383129" Jul 2 06:55:37.861000 audit[3480]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.865397 kernel: kauditd_printk_skb: 190 callbacks suppressed Jul 2 06:55:37.865462 kernel: audit: type=1325 audit(1719903337.861:467): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.865516 kernel: audit: type=1300 audit(1719903337.861:467): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff3a025f90 a2=0 
a3=7fff3a025f7c items=0 ppid=3284 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.861000 audit[3480]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff3a025f90 a2=0 a3=7fff3a025f7c items=0 ppid=3284 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.875602 kernel: audit: type=1327 audit(1719903337.861:467): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.862000 audit[3480]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.881514 kernel: audit: type=1325 audit(1719903337.862:468): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.862000 audit[3480]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3a025f90 a2=0 a3=0 items=0 ppid=3284 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.887474 kernel: audit: type=1300 audit(1719903337.862:468): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3a025f90 a2=0 a3=0 items=0 ppid=3284 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.887618 kernel: audit: type=1327 audit(1719903337.862:468): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.887000 audit[3482]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.887000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeaf4021a0 a2=0 a3=7ffeaf40218c items=0 ppid=3284 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.893366 kernel: audit: type=1325 audit(1719903337.887:469): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.893433 kernel: audit: type=1300 audit(1719903337.887:469): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeaf4021a0 a2=0 a3=7ffeaf40218c items=0 ppid=3284 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.893464 kernel: audit: type=1327 
audit(1719903337.887:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.887000 audit[3482]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.896569 kernel: audit: type=1325 audit(1719903337.887:470): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:37.887000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeaf4021a0 a2=0 a3=0 items=0 ppid=3284 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:37.989845 kubelet[3113]: I0702 06:55:37.989784 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-g8g4z" podStartSLOduration=4.560749855 podStartE2EDuration="6.989762463s" podCreationTimestamp="2024-07-02 06:55:31 +0000 UTC" firstStartedPulling="2024-07-02 06:55:32.154384456 +0000 UTC m=+14.445256280" lastFinishedPulling="2024-07-02 06:55:34.58339706 +0000 UTC m=+16.874268888" observedRunningTime="2024-07-02 06:55:35.12645038 +0000 UTC m=+17.417322229" watchObservedRunningTime="2024-07-02 06:55:37.989762463 +0000 UTC m=+20.280634313" Jul 2 06:55:38.032971 kubelet[3113]: I0702 06:55:38.032924 3113 topology_manager.go:215] "Topology Admit Handler" podUID="ef846a94-e752-4256-892b-4fc38b104ccc" podNamespace="calico-system" podName="calico-typha-bd9b9fb97-849sz" Jul 2 06:55:38.041167 systemd[1]: Created slice kubepods-besteffort-podef846a94_e752_4256_892b_4fc38b104ccc.slice - libcontainer container kubepods-besteffort-podef846a94_e752_4256_892b_4fc38b104ccc.slice. Jul 2 06:55:38.162228 kubelet[3113]: I0702 06:55:38.162089 3113 topology_manager.go:215] "Topology Admit Handler" podUID="ff769e49-bb72-4062-a34a-86d1629842de" podNamespace="calico-system" podName="calico-node-w97jh" Jul 2 06:55:38.173461 systemd[1]: Created slice kubepods-besteffort-podff769e49_bb72_4062_a34a_86d1629842de.slice - libcontainer container kubepods-besteffort-podff769e49_bb72_4062_a34a_86d1629842de.slice. 
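The pod_startup_latency_tracker line above for tigera-operator-76ff79f7fd-g8g4z reports podStartSLOduration=4.560749855s against podStartE2EDuration=6.989762463s. Assuming the SLO figure excludes the image-pull window (as the pod startup SLI is defined), the monotonic m=+ offsets printed on the same line reproduce it:

    # Reproduce podStartSLOduration for tigera-operator-76ff79f7fd-g8g4z from the
    # values logged above, assuming SLO duration = E2E duration - image-pull time.
    e2e = 6.989762463                      # podStartE2EDuration
    pull = 16.874268888 - 14.445256280     # lastFinishedPulling - firstStartedPulling (m=+ offsets)
    print(round(e2e - pull, 9))            # 4.560749855, matching podStartSLOduration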
Jul 2 06:55:38.184302 kubelet[3113]: I0702 06:55:38.184251 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef846a94-e752-4256-892b-4fc38b104ccc-typha-certs\") pod \"calico-typha-bd9b9fb97-849sz\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " pod="calico-system/calico-typha-bd9b9fb97-849sz" Jul 2 06:55:38.184565 kubelet[3113]: I0702 06:55:38.184546 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccr2d\" (UniqueName: \"kubernetes.io/projected/ef846a94-e752-4256-892b-4fc38b104ccc-kube-api-access-ccr2d\") pod \"calico-typha-bd9b9fb97-849sz\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " pod="calico-system/calico-typha-bd9b9fb97-849sz" Jul 2 06:55:38.184759 kubelet[3113]: I0702 06:55:38.184743 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef846a94-e752-4256-892b-4fc38b104ccc-tigera-ca-bundle\") pod \"calico-typha-bd9b9fb97-849sz\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " pod="calico-system/calico-typha-bd9b9fb97-849sz" Jul 2 06:55:38.281317 kubelet[3113]: I0702 06:55:38.281273 3113 topology_manager.go:215] "Topology Admit Handler" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" podNamespace="calico-system" podName="csi-node-driver-j88n9" Jul 2 06:55:38.281805 kubelet[3113]: E0702 06:55:38.281767 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:38.285811 kubelet[3113]: I0702 06:55:38.285768 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-lib-calico\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286414 kubelet[3113]: I0702 06:55:38.285980 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-net-dir\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286414 kubelet[3113]: I0702 06:55:38.286019 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-bin-dir\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286414 kubelet[3113]: I0702 06:55:38.286092 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff769e49-bb72-4062-a34a-86d1629842de-node-certs\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286414 kubelet[3113]: I0702 06:55:38.286302 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-lib-modules\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286414 kubelet[3113]: I0702 06:55:38.286334 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-xtables-lock\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286981 kubelet[3113]: I0702 06:55:38.286396 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-policysync\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286981 kubelet[3113]: I0702 06:55:38.286522 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-flexvol-driver-host\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286981 kubelet[3113]: I0702 06:55:38.286594 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff769e49-bb72-4062-a34a-86d1629842de-tigera-ca-bundle\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286981 kubelet[3113]: I0702 06:55:38.286660 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-log-dir\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.286981 kubelet[3113]: I0702 06:55:38.286874 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26lrl\" (UniqueName: \"kubernetes.io/projected/ff769e49-bb72-4062-a34a-86d1629842de-kube-api-access-26lrl\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.287597 kubelet[3113]: I0702 06:55:38.286906 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-run-calico\") pod \"calico-node-w97jh\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " pod="calico-system/calico-node-w97jh" Jul 2 06:55:38.347067 containerd[1789]: time="2024-07-02T06:55:38.347023038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9b9fb97-849sz,Uid:ef846a94-e752-4256-892b-4fc38b104ccc,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:38.387598 kubelet[3113]: I0702 06:55:38.387531 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8-kubelet-dir\") pod \"csi-node-driver-j88n9\" (UID: \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\") " pod="calico-system/csi-node-driver-j88n9" Jul 2 
06:55:38.387598 kubelet[3113]: I0702 06:55:38.387586 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpn9l\" (UniqueName: \"kubernetes.io/projected/dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8-kube-api-access-vpn9l\") pod \"csi-node-driver-j88n9\" (UID: \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\") " pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:38.387839 kubelet[3113]: I0702 06:55:38.387674 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8-varrun\") pod \"csi-node-driver-j88n9\" (UID: \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\") " pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:38.387839 kubelet[3113]: I0702 06:55:38.387743 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8-registration-dir\") pod \"csi-node-driver-j88n9\" (UID: \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\") " pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:38.387930 kubelet[3113]: I0702 06:55:38.387826 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8-socket-dir\") pod \"csi-node-driver-j88n9\" (UID: \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\") " pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:38.394965 kubelet[3113]: E0702 06:55:38.394919 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.394965 kubelet[3113]: W0702 06:55:38.394966 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.394997 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.396443 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431369 kubelet[3113]: W0702 06:55:38.396464 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.396579 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.397133 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431369 kubelet[3113]: W0702 06:55:38.397149 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.397255 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.397603 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431369 kubelet[3113]: W0702 06:55:38.397615 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431369 kubelet[3113]: E0702 06:55:38.397705 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.415826 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431875 kubelet[3113]: W0702 06:55:38.415847 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.415993 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.417521 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431875 kubelet[3113]: W0702 06:55:38.417581 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.417728 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.418540 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.431875 kubelet[3113]: W0702 06:55:38.418817 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.419454 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.431875 kubelet[3113]: E0702 06:55:38.419898 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.432461 kubelet[3113]: W0702 06:55:38.419912 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.420265 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.420431 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.432461 kubelet[3113]: W0702 06:55:38.421048 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.421161 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.421330 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.432461 kubelet[3113]: W0702 06:55:38.421340 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.425355 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.432461 kubelet[3113]: E0702 06:55:38.425782 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.432461 kubelet[3113]: W0702 06:55:38.425825 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.425852 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.431272 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.435453 kubelet[3113]: W0702 06:55:38.431295 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.431429 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.431639 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.435453 kubelet[3113]: W0702 06:55:38.431651 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.431666 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.432234 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.435453 kubelet[3113]: W0702 06:55:38.432248 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.435453 kubelet[3113]: E0702 06:55:38.432265 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.469769 kubelet[3113]: E0702 06:55:38.469728 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.469769 kubelet[3113]: W0702 06:55:38.469761 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.470118 kubelet[3113]: E0702 06:55:38.469793 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.471184 containerd[1789]: time="2024-07-02T06:55:38.470800507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:38.471184 containerd[1789]: time="2024-07-02T06:55:38.470881245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:38.475663 containerd[1789]: time="2024-07-02T06:55:38.475555822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:38.475663 containerd[1789]: time="2024-07-02T06:55:38.475631710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:38.480748 containerd[1789]: time="2024-07-02T06:55:38.480685548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w97jh,Uid:ff769e49-bb72-4062-a34a-86d1629842de,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:38.488832 kubelet[3113]: E0702 06:55:38.488798 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.488832 kubelet[3113]: W0702 06:55:38.488822 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.489047 kubelet[3113]: E0702 06:55:38.488844 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.492976 kubelet[3113]: E0702 06:55:38.492915 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.492976 kubelet[3113]: W0702 06:55:38.492974 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.494064 kubelet[3113]: E0702 06:55:38.493002 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.494064 kubelet[3113]: E0702 06:55:38.493788 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.494064 kubelet[3113]: W0702 06:55:38.493802 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.494064 kubelet[3113]: E0702 06:55:38.493821 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.494812 kubelet[3113]: E0702 06:55:38.494788 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.494812 kubelet[3113]: W0702 06:55:38.494807 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.495286 kubelet[3113]: E0702 06:55:38.495257 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.496424 kubelet[3113]: E0702 06:55:38.496400 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.496424 kubelet[3113]: W0702 06:55:38.496419 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.496869 kubelet[3113]: E0702 06:55:38.496846 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.497573 kubelet[3113]: E0702 06:55:38.497477 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.497573 kubelet[3113]: W0702 06:55:38.497518 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.497718 kubelet[3113]: E0702 06:55:38.497625 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.498347 kubelet[3113]: E0702 06:55:38.498326 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.498347 kubelet[3113]: W0702 06:55:38.498342 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.499010 kubelet[3113]: E0702 06:55:38.498976 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.499234 kubelet[3113]: E0702 06:55:38.499168 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.499609 kubelet[3113]: W0702 06:55:38.499179 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.499853 kubelet[3113]: E0702 06:55:38.499807 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.500245 kubelet[3113]: E0702 06:55:38.500225 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.500245 kubelet[3113]: W0702 06:55:38.500242 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.500979 kubelet[3113]: E0702 06:55:38.500894 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.501066 kubelet[3113]: E0702 06:55:38.501018 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.501066 kubelet[3113]: W0702 06:55:38.501029 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.501187 kubelet[3113]: E0702 06:55:38.501123 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.501800 kubelet[3113]: E0702 06:55:38.501780 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.501800 kubelet[3113]: W0702 06:55:38.501795 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.502040 kubelet[3113]: E0702 06:55:38.502017 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.502256 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.505543 kubelet[3113]: W0702 06:55:38.502269 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.502371 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.502762 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.505543 kubelet[3113]: W0702 06:55:38.502879 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.502980 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.503118 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.505543 kubelet[3113]: W0702 06:55:38.503127 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.503217 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.505543 kubelet[3113]: E0702 06:55:38.503387 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.506962 kubelet[3113]: W0702 06:55:38.503396 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.503480 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.503675 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.506962 kubelet[3113]: W0702 06:55:38.503683 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.503766 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.503906 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.506962 kubelet[3113]: W0702 06:55:38.503915 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.504195 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.506962 kubelet[3113]: W0702 06:55:38.504205 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.506962 kubelet[3113]: E0702 06:55:38.504219 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.504368 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.504590 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.510935 kubelet[3113]: W0702 06:55:38.504603 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.504620 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.505020 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.510935 kubelet[3113]: W0702 06:55:38.505033 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.505051 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.505300 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.510935 kubelet[3113]: W0702 06:55:38.505309 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.510935 kubelet[3113]: E0702 06:55:38.505327 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.505700 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.511360 kubelet[3113]: W0702 06:55:38.505711 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.505729 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.506436 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.511360 kubelet[3113]: W0702 06:55:38.506448 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.506467 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.506899 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.511360 kubelet[3113]: W0702 06:55:38.506910 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.506923 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.511360 kubelet[3113]: E0702 06:55:38.507768 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.511835 kubelet[3113]: W0702 06:55:38.507780 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.511835 kubelet[3113]: E0702 06:55:38.507793 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.530783 systemd[1]: Started cri-containerd-44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0.scope - libcontainer container 44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0. Jul 2 06:55:38.586508 containerd[1789]: time="2024-07-02T06:55:38.584481556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:38.586508 containerd[1789]: time="2024-07-02T06:55:38.584625828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:38.586508 containerd[1789]: time="2024-07-02T06:55:38.584671791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:38.586508 containerd[1789]: time="2024-07-02T06:55:38.584701097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:38.596718 kubelet[3113]: E0702 06:55:38.596679 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:55:38.596718 kubelet[3113]: W0702 06:55:38.596718 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:55:38.596970 kubelet[3113]: E0702 06:55:38.596744 3113 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:55:38.631810 systemd[1]: Started cri-containerd-55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1.scope - libcontainer container 55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1. Jul 2 06:55:38.751000 audit: BPF prog-id=114 op=LOAD Jul 2 06:55:38.751000 audit: BPF prog-id=115 op=LOAD Jul 2 06:55:38.751000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3560 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663365316434613436336436376564343836643532623836623234 Jul 2 06:55:38.751000 audit: BPF prog-id=116 op=LOAD Jul 2 06:55:38.751000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3560 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663365316434613436336436376564343836643532623836623234 Jul 2 06:55:38.751000 audit: BPF prog-id=116 op=UNLOAD Jul 2 06:55:38.751000 audit: BPF prog-id=115 op=UNLOAD Jul 2 06:55:38.751000 audit: BPF prog-id=117 op=LOAD Jul 2 06:55:38.751000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3560 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663365316434613436336436376564343836643532623836623234 Jul 2 06:55:38.803929 containerd[1789]: time="2024-07-02T06:55:38.803876627Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-w97jh,Uid:ff769e49-bb72-4062-a34a-86d1629842de,Namespace:calico-system,Attempt:0,} returns sandbox id \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\"" Jul 2 06:55:38.805000 audit: BPF prog-id=118 op=LOAD Jul 2 06:55:38.811761 containerd[1789]: time="2024-07-02T06:55:38.811712782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 06:55:38.810000 audit: BPF prog-id=119 op=LOAD Jul 2 06:55:38.810000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434343836623261383164663638643864663538323761313061646239 Jul 2 06:55:38.811000 audit: BPF prog-id=120 op=LOAD Jul 2 06:55:38.811000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434343836623261383164663638643864663538323761313061646239 Jul 2 06:55:38.811000 audit: BPF prog-id=120 op=UNLOAD Jul 2 06:55:38.811000 audit: BPF prog-id=119 op=UNLOAD Jul 2 06:55:38.811000 audit: BPF prog-id=121 op=LOAD Jul 2 06:55:38.811000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434343836623261383164663638643864663538323761313061646239 Jul 2 06:55:38.888965 containerd[1789]: time="2024-07-02T06:55:38.888915110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9b9fb97-849sz,Uid:ef846a94-e752-4256-892b-4fc38b104ccc,Namespace:calico-system,Attempt:0,} returns sandbox id \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\"" Jul 2 06:55:38.907000 audit[3615]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:38.907000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffff32009a0 a2=0 a3=7ffff320098c items=0 ppid=3284 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.907000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:38.908000 audit[3615]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:38.908000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffff32009a0 a2=0 a3=0 items=0 ppid=3284 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:38.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:39.920852 kubelet[3113]: E0702 06:55:39.920806 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:39.923000 audit[3617]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:39.923000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff56eaed80 a2=0 a3=7fff56eaed6c items=0 ppid=3284 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:39.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:39.926000 audit[3617]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:39.926000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff56eaed80 a2=0 a3=0 items=0 ppid=3284 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:39.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:40.438825 containerd[1789]: time="2024-07-02T06:55:40.438773842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:40.442064 containerd[1789]: time="2024-07-02T06:55:40.441998656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 06:55:40.444196 containerd[1789]: time="2024-07-02T06:55:40.444150585Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:40.448595 containerd[1789]: time="2024-07-02T06:55:40.448552102Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:40.476395 containerd[1789]: time="2024-07-02T06:55:40.476335799Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:40.479535 containerd[1789]: time="2024-07-02T06:55:40.476773585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.6649868s" Jul 2 06:55:40.479737 containerd[1789]: time="2024-07-02T06:55:40.479709323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 06:55:40.496511 containerd[1789]: time="2024-07-02T06:55:40.495112775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 06:55:40.499401 containerd[1789]: time="2024-07-02T06:55:40.499361066Z" level=info msg="CreateContainer within sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:55:40.588414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652836188.mount: Deactivated successfully. Jul 2 06:55:40.592888 containerd[1789]: time="2024-07-02T06:55:40.592838459Z" level=info msg="CreateContainer within sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\"" Jul 2 06:55:40.593743 containerd[1789]: time="2024-07-02T06:55:40.593709317Z" level=info msg="StartContainer for \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\"" Jul 2 06:55:40.639699 systemd[1]: Started cri-containerd-404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7.scope - libcontainer container 404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7. Jul 2 06:55:40.649326 systemd[1]: run-containerd-runc-k8s.io-404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7-runc.cllxOo.mount: Deactivated successfully. 
Jul 2 06:55:40.677000 audit: BPF prog-id=122 op=LOAD Jul 2 06:55:40.677000 audit[3631]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3560 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:40.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346239353636376332313832363434636438643437336530343563 Jul 2 06:55:40.677000 audit: BPF prog-id=123 op=LOAD Jul 2 06:55:40.677000 audit[3631]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3560 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:40.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346239353636376332313832363434636438643437336530343563 Jul 2 06:55:40.677000 audit: BPF prog-id=123 op=UNLOAD Jul 2 06:55:40.677000 audit: BPF prog-id=122 op=UNLOAD Jul 2 06:55:40.677000 audit: BPF prog-id=124 op=LOAD Jul 2 06:55:40.677000 audit[3631]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3560 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:40.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346239353636376332313832363434636438643437336530343563 Jul 2 06:55:40.705566 containerd[1789]: time="2024-07-02T06:55:40.704153834Z" level=info msg="StartContainer for \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\" returns successfully" Jul 2 06:55:40.735391 systemd[1]: cri-containerd-404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7.scope: Deactivated successfully. 
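The audit PROCTITLE records throughout this run are hex-encoded command lines with NUL bytes separating the argv elements (and truncated, since the kernel caps the audited proctitle length). The runc ones decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>..." and the iptables-restore ones to "iptables-restore -w 5 -W 100000 --noflush --counters". A small stdlib-only sketch for decoding them offline:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex string back into argv,
    // which the kernel records NUL-separated.
    func decodeProctitle(h string) ([]string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return nil, err
        }
        return strings.Split(string(raw), "\x00"), nil
    }

    func main() {
        // The iptables-restore PROCTITLE seen repeatedly in this log.
        argv, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.Join(argv, " ")) // iptables-restore -w 5 -W 100000 --noflush --counters
    }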
Jul 2 06:55:40.737000 audit: BPF prog-id=124 op=UNLOAD Jul 2 06:55:41.046888 containerd[1789]: time="2024-07-02T06:55:41.046743583Z" level=info msg="shim disconnected" id=404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7 namespace=k8s.io Jul 2 06:55:41.046888 containerd[1789]: time="2024-07-02T06:55:41.046804128Z" level=warning msg="cleaning up after shim disconnected" id=404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7 namespace=k8s.io Jul 2 06:55:41.046888 containerd[1789]: time="2024-07-02T06:55:41.046816110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:41.141323 containerd[1789]: time="2024-07-02T06:55:41.141245435Z" level=info msg="StopPodSandbox for \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\"" Jul 2 06:55:41.165170 containerd[1789]: time="2024-07-02T06:55:41.165105300Z" level=info msg="Container to stop \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 06:55:41.194000 audit: BPF prog-id=114 op=UNLOAD Jul 2 06:55:41.195164 systemd[1]: cri-containerd-55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1.scope: Deactivated successfully. Jul 2 06:55:41.197000 audit: BPF prog-id=117 op=UNLOAD Jul 2 06:55:41.237691 containerd[1789]: time="2024-07-02T06:55:41.237592503Z" level=info msg="shim disconnected" id=55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1 namespace=k8s.io Jul 2 06:55:41.237691 containerd[1789]: time="2024-07-02T06:55:41.237680887Z" level=warning msg="cleaning up after shim disconnected" id=55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1 namespace=k8s.io Jul 2 06:55:41.238172 containerd[1789]: time="2024-07-02T06:55:41.237775426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:41.266363 containerd[1789]: time="2024-07-02T06:55:41.266316325Z" level=info msg="TearDown network for sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" successfully" Jul 2 06:55:41.266363 containerd[1789]: time="2024-07-02T06:55:41.266355867Z" level=info msg="StopPodSandbox for \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" returns successfully" Jul 2 06:55:41.448136 kubelet[3113]: I0702 06:55:41.448066 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-log-dir\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.448620 kubelet[3113]: I0702 06:55:41.448222 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff769e49-bb72-4062-a34a-86d1629842de-tigera-ca-bundle\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.448620 kubelet[3113]: I0702 06:55:41.448140 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.448620 kubelet[3113]: I0702 06:55:41.448306 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26lrl\" (UniqueName: \"kubernetes.io/projected/ff769e49-bb72-4062-a34a-86d1629842de-kube-api-access-26lrl\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449134 kubelet[3113]: I0702 06:55:41.448796 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff769e49-bb72-4062-a34a-86d1629842de-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 06:55:41.449253 kubelet[3113]: I0702 06:55:41.448860 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-xtables-lock\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449312 kubelet[3113]: I0702 06:55:41.449256 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-lib-calico\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449312 kubelet[3113]: I0702 06:55:41.449285 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-net-dir\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449402 kubelet[3113]: I0702 06:55:41.449310 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-bin-dir\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449402 kubelet[3113]: I0702 06:55:41.449336 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-run-calico\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449402 kubelet[3113]: I0702 06:55:41.449358 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-lib-modules\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449402 kubelet[3113]: I0702 06:55:41.449383 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-flexvol-driver-host\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449628 kubelet[3113]: I0702 06:55:41.449406 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-policysync\") pod 
\"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449628 kubelet[3113]: I0702 06:55:41.449440 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff769e49-bb72-4062-a34a-86d1629842de-node-certs\") pod \"ff769e49-bb72-4062-a34a-86d1629842de\" (UID: \"ff769e49-bb72-4062-a34a-86d1629842de\") " Jul 2 06:55:41.449628 kubelet[3113]: I0702 06:55:41.449540 3113 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-log-dir\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.449628 kubelet[3113]: I0702 06:55:41.449556 3113 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff769e49-bb72-4062-a34a-86d1629842de-tigera-ca-bundle\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.452244 kubelet[3113]: I0702 06:55:41.452212 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452413 kubelet[3113]: I0702 06:55:41.452280 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452413 kubelet[3113]: I0702 06:55:41.452305 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452413 kubelet[3113]: I0702 06:55:41.452323 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452413 kubelet[3113]: I0702 06:55:41.452341 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452413 kubelet[3113]: I0702 06:55:41.452359 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452746 kubelet[3113]: I0702 06:55:41.452376 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.452746 kubelet[3113]: I0702 06:55:41.452395 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-policysync" (OuterVolumeSpecName: "policysync") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:55:41.458161 kubelet[3113]: I0702 06:55:41.458112 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff769e49-bb72-4062-a34a-86d1629842de-kube-api-access-26lrl" (OuterVolumeSpecName: "kube-api-access-26lrl") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "kube-api-access-26lrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 06:55:41.458432 kubelet[3113]: I0702 06:55:41.458401 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff769e49-bb72-4062-a34a-86d1629842de-node-certs" (OuterVolumeSpecName: "node-certs") pod "ff769e49-bb72-4062-a34a-86d1629842de" (UID: "ff769e49-bb72-4062-a34a-86d1629842de"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 06:55:41.550787 kubelet[3113]: I0702 06:55:41.550752 3113 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-run-calico\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.550787 kubelet[3113]: I0702 06:55:41.550782 3113 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-lib-modules\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.550787 kubelet[3113]: I0702 06:55:41.550797 3113 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-flexvol-driver-host\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550809 3113 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-policysync\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550822 3113 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff769e49-bb72-4062-a34a-86d1629842de-node-certs\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550833 3113 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-26lrl\" (UniqueName: \"kubernetes.io/projected/ff769e49-bb72-4062-a34a-86d1629842de-kube-api-access-26lrl\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550843 3113 reconciler_common.go:289] 
"Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-xtables-lock\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550852 3113 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-var-lib-calico\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550861 3113 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-bin-dir\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.551105 kubelet[3113]: I0702 06:55:41.550872 3113 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff769e49-bb72-4062-a34a-86d1629842de-cni-net-dir\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:41.579668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7-rootfs.mount: Deactivated successfully. Jul 2 06:55:41.580087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1-rootfs.mount: Deactivated successfully. Jul 2 06:55:41.580502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1-shm.mount: Deactivated successfully. Jul 2 06:55:41.580752 systemd[1]: var-lib-kubelet-pods-ff769e49\x2dbb72\x2d4062\x2da34a\x2d86d1629842de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26lrl.mount: Deactivated successfully. Jul 2 06:55:41.581031 systemd[1]: var-lib-kubelet-pods-ff769e49\x2dbb72\x2d4062\x2da34a\x2d86d1629842de-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 06:55:41.921105 kubelet[3113]: E0702 06:55:41.917887 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:41.936276 systemd[1]: Removed slice kubepods-besteffort-podff769e49_bb72_4062_a34a_86d1629842de.slice - libcontainer container kubepods-besteffort-podff769e49_bb72_4062_a34a_86d1629842de.slice. 
Jul 2 06:55:42.145284 kubelet[3113]: I0702 06:55:42.144197 3113 scope.go:117] "RemoveContainer" containerID="404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7" Jul 2 06:55:42.150712 containerd[1789]: time="2024-07-02T06:55:42.150670732Z" level=info msg="RemoveContainer for \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\"" Jul 2 06:55:42.177166 containerd[1789]: time="2024-07-02T06:55:42.177055912Z" level=info msg="RemoveContainer for \"404b95667c2182644cd8d473e045c2989182e6a6313419f31a55d9818a20bdf7\" returns successfully" Jul 2 06:55:42.305902 kubelet[3113]: I0702 06:55:42.305769 3113 topology_manager.go:215] "Topology Admit Handler" podUID="34af09eb-1c6b-4334-8668-c4ef165761b1" podNamespace="calico-system" podName="calico-node-jdqsw" Jul 2 06:55:42.306224 kubelet[3113]: E0702 06:55:42.305961 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff769e49-bb72-4062-a34a-86d1629842de" containerName="flexvol-driver" Jul 2 06:55:42.306224 kubelet[3113]: I0702 06:55:42.306144 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff769e49-bb72-4062-a34a-86d1629842de" containerName="flexvol-driver" Jul 2 06:55:42.314140 systemd[1]: Created slice kubepods-besteffort-pod34af09eb_1c6b_4334_8668_c4ef165761b1.slice - libcontainer container kubepods-besteffort-pod34af09eb_1c6b_4334_8668_c4ef165761b1.slice. Jul 2 06:55:42.356811 kubelet[3113]: I0702 06:55:42.356775 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34af09eb-1c6b-4334-8668-c4ef165761b1-node-certs\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.356992 kubelet[3113]: I0702 06:55:42.356867 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-var-run-calico\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.356992 kubelet[3113]: I0702 06:55:42.356906 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-flexvol-driver-host\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.356992 kubelet[3113]: I0702 06:55:42.356970 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-cni-bin-dir\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357162 kubelet[3113]: I0702 06:55:42.357022 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-cni-net-dir\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357162 kubelet[3113]: I0702 06:55:42.357054 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-var-lib-calico\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357162 kubelet[3113]: I0702 06:55:42.357109 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-xtables-lock\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357309 kubelet[3113]: I0702 06:55:42.357135 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqsp\" (UniqueName: \"kubernetes.io/projected/34af09eb-1c6b-4334-8668-c4ef165761b1-kube-api-access-gxqsp\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357309 kubelet[3113]: I0702 06:55:42.357199 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-lib-modules\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357309 kubelet[3113]: I0702 06:55:42.357255 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-policysync\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357462 kubelet[3113]: I0702 06:55:42.357281 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34af09eb-1c6b-4334-8668-c4ef165761b1-tigera-ca-bundle\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.357462 kubelet[3113]: I0702 06:55:42.357341 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34af09eb-1c6b-4334-8668-c4ef165761b1-cni-log-dir\") pod \"calico-node-jdqsw\" (UID: \"34af09eb-1c6b-4334-8668-c4ef165761b1\") " pod="calico-system/calico-node-jdqsw" Jul 2 06:55:42.630167 containerd[1789]: time="2024-07-02T06:55:42.630121191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jdqsw,Uid:34af09eb-1c6b-4334-8668-c4ef165761b1,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:42.729835 containerd[1789]: time="2024-07-02T06:55:42.729725565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:42.730146 containerd[1789]: time="2024-07-02T06:55:42.730097620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:42.730674 containerd[1789]: time="2024-07-02T06:55:42.730636054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:42.731080 containerd[1789]: time="2024-07-02T06:55:42.731039402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:42.854386 systemd[1]: run-containerd-runc-k8s.io-fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17-runc.ganxO6.mount: Deactivated successfully. Jul 2 06:55:42.863709 systemd[1]: Started cri-containerd-fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17.scope - libcontainer container fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17. Jul 2 06:55:42.912051 kernel: kauditd_printk_skb: 52 callbacks suppressed Jul 2 06:55:42.912225 kernel: audit: type=1334 audit(1719903342.902:495): prog-id=125 op=LOAD Jul 2 06:55:42.902000 audit: BPF prog-id=125 op=LOAD Jul 2 06:55:42.918000 audit: BPF prog-id=126 op=LOAD Jul 2 06:55:42.922080 kubelet[3113]: E0702 06:55:42.920200 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:42.926651 kernel: audit: type=1334 audit(1719903342.918:496): prog-id=126 op=LOAD Jul 2 06:55:42.926793 kernel: audit: type=1300 audit(1719903342.918:496): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3732 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:42.918000 audit[3742]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3732 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:42.930200 kernel: audit: type=1327 audit(1719903342.918:496): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353134643265616430616361376234383163353730633137303364 Jul 2 06:55:42.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353134643265616430616361376234383163353730633137303364 Jul 2 06:55:42.936178 kernel: audit: type=1334 audit(1719903342.929:497): prog-id=127 op=LOAD Jul 2 06:55:42.936294 kernel: audit: type=1300 audit(1719903342.929:497): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3732 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:42.929000 audit: BPF prog-id=127 op=LOAD Jul 2 06:55:42.929000 audit[3742]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3732 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:42.929000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353134643265616430616361376234383163353730633137303364 Jul 2 06:55:42.941565 kernel: audit: type=1327 audit(1719903342.929:497): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353134643265616430616361376234383163353730633137303364 Jul 2 06:55:42.946251 kernel: audit: type=1334 audit(1719903342.929:498): prog-id=127 op=UNLOAD Jul 2 06:55:42.946472 kernel: audit: type=1334 audit(1719903342.929:499): prog-id=126 op=UNLOAD Jul 2 06:55:42.946534 kernel: audit: type=1334 audit(1719903342.929:500): prog-id=128 op=LOAD Jul 2 06:55:42.929000 audit: BPF prog-id=127 op=UNLOAD Jul 2 06:55:42.929000 audit: BPF prog-id=126 op=UNLOAD Jul 2 06:55:42.929000 audit: BPF prog-id=128 op=LOAD Jul 2 06:55:42.929000 audit[3742]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3732 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:42.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353134643265616430616361376234383163353730633137303364 Jul 2 06:55:43.000904 containerd[1789]: time="2024-07-02T06:55:43.000856426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jdqsw,Uid:34af09eb-1c6b-4334-8668-c4ef165761b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\"" Jul 2 06:55:43.007597 containerd[1789]: time="2024-07-02T06:55:43.007549742Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:55:43.036705 containerd[1789]: time="2024-07-02T06:55:43.036648665Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd\"" Jul 2 06:55:43.038016 containerd[1789]: time="2024-07-02T06:55:43.037916514Z" level=info msg="StartContainer for \"e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd\"" Jul 2 06:55:43.187826 systemd[1]: Started cri-containerd-e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd.scope - libcontainer container e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd. 
Jul 2 06:55:43.287000 audit: BPF prog-id=129 op=LOAD Jul 2 06:55:43.287000 audit[3773]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3732 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:43.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303463636361373531336363336238656637616130326566343834 Jul 2 06:55:43.287000 audit: BPF prog-id=130 op=LOAD Jul 2 06:55:43.287000 audit[3773]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3732 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:43.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303463636361373531336363336238656637616130326566343834 Jul 2 06:55:43.287000 audit: BPF prog-id=130 op=UNLOAD Jul 2 06:55:43.287000 audit: BPF prog-id=129 op=UNLOAD Jul 2 06:55:43.287000 audit: BPF prog-id=131 op=LOAD Jul 2 06:55:43.287000 audit[3773]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3732 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:43.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303463636361373531336363336238656637616130326566343834 Jul 2 06:55:43.357242 containerd[1789]: time="2024-07-02T06:55:43.357187285Z" level=info msg="StartContainer for \"e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd\" returns successfully" Jul 2 06:55:43.435224 systemd[1]: cri-containerd-e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd.scope: Deactivated successfully. 
Jul 2 06:55:43.438000 audit: BPF prog-id=131 op=UNLOAD Jul 2 06:55:43.683476 containerd[1789]: time="2024-07-02T06:55:43.683411886Z" level=info msg="shim disconnected" id=e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd namespace=k8s.io Jul 2 06:55:43.684055 containerd[1789]: time="2024-07-02T06:55:43.684012077Z" level=warning msg="cleaning up after shim disconnected" id=e904ccca7513cc3b8ef7aa02ef484665aa75bbec6a390ed7fb277263267820dd namespace=k8s.io Jul 2 06:55:43.684255 containerd[1789]: time="2024-07-02T06:55:43.684224875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:43.907223 containerd[1789]: time="2024-07-02T06:55:43.907173319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:43.910003 containerd[1789]: time="2024-07-02T06:55:43.909939549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 06:55:43.913104 containerd[1789]: time="2024-07-02T06:55:43.912280219Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:43.917141 containerd[1789]: time="2024-07-02T06:55:43.917101931Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:43.926419 kubelet[3113]: I0702 06:55:43.926382 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff769e49-bb72-4062-a34a-86d1629842de" path="/var/lib/kubelet/pods/ff769e49-bb72-4062-a34a-86d1629842de/volumes" Jul 2 06:55:43.931547 containerd[1789]: time="2024-07-02T06:55:43.929599241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.434386162s" Jul 2 06:55:43.931547 containerd[1789]: time="2024-07-02T06:55:43.929649174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 06:55:43.933700 containerd[1789]: time="2024-07-02T06:55:43.933596611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:43.958190 containerd[1789]: time="2024-07-02T06:55:43.958148774Z" level=info msg="CreateContainer within sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 06:55:44.051542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487836348.mount: Deactivated successfully. 
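Each successful pull above is reported with three different identifiers: the repo tag (ghcr.io/flatcar/calico/typha:v3.28.0), the repo digest (the registry manifest digest, ...@sha256:eff15...), and the image id, which is typically the digest of the image's configuration blob rather than of its manifest. A stdlib-only sketch that splits a digest-pinned reference into its parts (the helper name is illustrative, not a containerd API):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitPinnedRef splits "repository@sha256:..." into repository and digest.
    // Purely illustrative string handling; real tooling uses a reference parser.
    func splitPinnedRef(ref string) (repo, digest string, ok bool) {
        return strings.Cut(ref, "@")
    }

    func main() {
        ref := "ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed"
        repo, digest, _ := splitPinnedRef(ref)
        fmt.Println("repository:", repo)
        fmt.Println("digest:    ", digest)
    }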
Jul 2 06:55:44.062345 containerd[1789]: time="2024-07-02T06:55:44.062289392Z" level=info msg="CreateContainer within sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\"" Jul 2 06:55:44.063298 containerd[1789]: time="2024-07-02T06:55:44.063257614Z" level=info msg="StartContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\"" Jul 2 06:55:44.134735 systemd[1]: Started cri-containerd-12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86.scope - libcontainer container 12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86. Jul 2 06:55:44.159000 audit: BPF prog-id=132 op=LOAD Jul 2 06:55:44.160000 audit: BPF prog-id=133 op=LOAD Jul 2 06:55:44.160000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3508 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:44.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663034653432616430623236623735666332323036653638653331 Jul 2 06:55:44.160000 audit: BPF prog-id=134 op=LOAD Jul 2 06:55:44.160000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3508 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:44.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663034653432616430623236623735666332323036653638653331 Jul 2 06:55:44.160000 audit: BPF prog-id=134 op=UNLOAD Jul 2 06:55:44.160000 audit: BPF prog-id=133 op=UNLOAD Jul 2 06:55:44.160000 audit: BPF prog-id=135 op=LOAD Jul 2 06:55:44.160000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3508 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:44.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663034653432616430623236623735666332323036653638653331 Jul 2 06:55:44.186153 containerd[1789]: time="2024-07-02T06:55:44.185958175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 06:55:44.258905 containerd[1789]: time="2024-07-02T06:55:44.258846550Z" level=info msg="StartContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" returns successfully" Jul 2 06:55:44.918221 kubelet[3113]: E0702 06:55:44.918105 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:45.220499 containerd[1789]: time="2024-07-02T06:55:45.220236402Z" level=info msg="StopContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" with timeout 300 (s)" Jul 2 06:55:45.261737 containerd[1789]: time="2024-07-02T06:55:45.261685589Z" level=info msg="Stop container \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" with signal terminated" Jul 2 06:55:45.273088 kubelet[3113]: I0702 06:55:45.272401 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bd9b9fb97-849sz" podStartSLOduration=2.231851229 podStartE2EDuration="7.272377124s" podCreationTimestamp="2024-07-02 06:55:38 +0000 UTC" firstStartedPulling="2024-07-02 06:55:38.891784277 +0000 UTC m=+21.182656105" lastFinishedPulling="2024-07-02 06:55:43.93231017 +0000 UTC m=+26.223182000" observedRunningTime="2024-07-02 06:55:45.235593121 +0000 UTC m=+27.526464969" watchObservedRunningTime="2024-07-02 06:55:45.272377124 +0000 UTC m=+27.563248975" Jul 2 06:55:45.295060 systemd[1]: cri-containerd-12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86.scope: Deactivated successfully. Jul 2 06:55:45.293000 audit: BPF prog-id=132 op=UNLOAD Jul 2 06:55:45.300000 audit: BPF prog-id=135 op=UNLOAD Jul 2 06:55:45.303000 audit[3880]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:45.303000 audit[3880]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe605943b0 a2=0 a3=7ffe6059439c items=0 ppid=3284 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:45.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:45.306000 audit[3880]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:45.306000 audit[3880]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe605943b0 a2=0 a3=7ffe6059439c items=0 ppid=3284 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:45.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:45.345529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86-rootfs.mount: Deactivated successfully. 
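The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (06:55:45.272377124 minus 06:55:38, about 7.272s), and podStartSLOduration subtracts the image-pulling window (lastFinishedPulling minus firstStartedPulling, about 5.041s), leaving about 2.232s. A quick check of that arithmetic from the wall-clock values printed in the same entry (the result differs from the logged 2.231851229s only by nanosecond rounding of the printed timestamps):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches Go's time.Time String() output as printed by the kubelet.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-07-02 06:55:38 +0000 UTC")
        firstPull := mustParse("2024-07-02 06:55:38.891784277 +0000 UTC")
        lastPull := mustParse("2024-07-02 06:55:43.93231017 +0000 UTC")
        running := mustParse("2024-07-02 06:55:45.272377124 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)
        pulling := lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e)           // 7.272377124s
        fmt.Println("pulling window:     ", pulling)       // 5.040525893s
        fmt.Println("podStartSLOduration:", e2e - pulling) // ~2.231851231s
    }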
Jul 2 06:55:45.450286 containerd[1789]: time="2024-07-02T06:55:45.450201335Z" level=info msg="shim disconnected" id=12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86 namespace=k8s.io Jul 2 06:55:45.451173 containerd[1789]: time="2024-07-02T06:55:45.451143309Z" level=warning msg="cleaning up after shim disconnected" id=12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86 namespace=k8s.io Jul 2 06:55:45.451537 containerd[1789]: time="2024-07-02T06:55:45.451511041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:45.512370 containerd[1789]: time="2024-07-02T06:55:45.512222585Z" level=info msg="StopContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" returns successfully" Jul 2 06:55:45.513730 containerd[1789]: time="2024-07-02T06:55:45.513694083Z" level=info msg="StopPodSandbox for \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\"" Jul 2 06:55:45.513870 containerd[1789]: time="2024-07-02T06:55:45.513770315Z" level=info msg="Container to stop \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 06:55:45.522156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0-shm.mount: Deactivated successfully. Jul 2 06:55:45.543355 systemd[1]: cri-containerd-44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0.scope: Deactivated successfully. Jul 2 06:55:45.542000 audit: BPF prog-id=118 op=UNLOAD Jul 2 06:55:45.546000 audit: BPF prog-id=121 op=UNLOAD Jul 2 06:55:45.628534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0-rootfs.mount: Deactivated successfully. 
Jul 2 06:55:45.720154 containerd[1789]: time="2024-07-02T06:55:45.719997286Z" level=info msg="shim disconnected" id=44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0 namespace=k8s.io Jul 2 06:55:45.720529 containerd[1789]: time="2024-07-02T06:55:45.720239112Z" level=warning msg="cleaning up after shim disconnected" id=44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0 namespace=k8s.io Jul 2 06:55:45.720687 containerd[1789]: time="2024-07-02T06:55:45.720359601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:45.760344 containerd[1789]: time="2024-07-02T06:55:45.759716360Z" level=info msg="TearDown network for sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" successfully" Jul 2 06:55:45.760344 containerd[1789]: time="2024-07-02T06:55:45.759776513Z" level=info msg="StopPodSandbox for \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" returns successfully" Jul 2 06:55:45.800674 kubelet[3113]: I0702 06:55:45.800219 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef846a94-e752-4256-892b-4fc38b104ccc-tigera-ca-bundle\") pod \"ef846a94-e752-4256-892b-4fc38b104ccc\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " Jul 2 06:55:45.800674 kubelet[3113]: I0702 06:55:45.800329 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccr2d\" (UniqueName: \"kubernetes.io/projected/ef846a94-e752-4256-892b-4fc38b104ccc-kube-api-access-ccr2d\") pod \"ef846a94-e752-4256-892b-4fc38b104ccc\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " Jul 2 06:55:45.800674 kubelet[3113]: I0702 06:55:45.800369 3113 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef846a94-e752-4256-892b-4fc38b104ccc-typha-certs\") pod \"ef846a94-e752-4256-892b-4fc38b104ccc\" (UID: \"ef846a94-e752-4256-892b-4fc38b104ccc\") " Jul 2 06:55:45.809147 systemd[1]: var-lib-kubelet-pods-ef846a94\x2de752\x2d4256\x2d892b\x2d4fc38b104ccc-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 06:55:45.813088 kubelet[3113]: I0702 06:55:45.813048 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef846a94-e752-4256-892b-4fc38b104ccc-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ef846a94-e752-4256-892b-4fc38b104ccc" (UID: "ef846a94-e752-4256-892b-4fc38b104ccc"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 06:55:45.816930 systemd[1]: var-lib-kubelet-pods-ef846a94\x2de752\x2d4256\x2d892b\x2d4fc38b104ccc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jul 2 06:55:45.824067 kubelet[3113]: I0702 06:55:45.823178 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef846a94-e752-4256-892b-4fc38b104ccc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ef846a94-e752-4256-892b-4fc38b104ccc" (UID: "ef846a94-e752-4256-892b-4fc38b104ccc"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 06:55:45.824296 kubelet[3113]: I0702 06:55:45.823978 3113 topology_manager.go:215] "Topology Admit Handler" podUID="c344f826-6c7e-42ed-85f0-84c9cca63674" podNamespace="calico-system" podName="calico-typha-6b4b486bf8-q8n9s" Jul 2 06:55:45.825432 kubelet[3113]: E0702 06:55:45.824452 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef846a94-e752-4256-892b-4fc38b104ccc" containerName="calico-typha" Jul 2 06:55:45.825649 kubelet[3113]: I0702 06:55:45.825625 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef846a94-e752-4256-892b-4fc38b104ccc" containerName="calico-typha" Jul 2 06:55:45.833718 systemd[1]: Created slice kubepods-besteffort-podc344f826_6c7e_42ed_85f0_84c9cca63674.slice - libcontainer container kubepods-besteffort-podc344f826_6c7e_42ed_85f0_84c9cca63674.slice. Jul 2 06:55:45.843026 systemd[1]: var-lib-kubelet-pods-ef846a94\x2de752\x2d4256\x2d892b\x2d4fc38b104ccc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccr2d.mount: Deactivated successfully. Jul 2 06:55:45.845754 kubelet[3113]: I0702 06:55:45.845708 3113 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef846a94-e752-4256-892b-4fc38b104ccc-kube-api-access-ccr2d" (OuterVolumeSpecName: "kube-api-access-ccr2d") pod "ef846a94-e752-4256-892b-4fc38b104ccc" (UID: "ef846a94-e752-4256-892b-4fc38b104ccc"). InnerVolumeSpecName "kube-api-access-ccr2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 06:55:45.850000 audit[3948]: NETFILTER_CFG table=filter:99 family=2 entries=15 op=nft_register_rule pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:45.850000 audit[3948]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe2ff61e40 a2=0 a3=7ffe2ff61e2c items=0 ppid=3284 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:45.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:45.852000 audit[3948]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_unregister_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:45.852000 audit[3948]: SYSCALL arch=c000003e syscall=46 success=yes exit=2956 a0=3 a1=7ffe2ff61e40 a2=0 a3=0 items=0 ppid=3284 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:45.852000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:45.901600 kubelet[3113]: I0702 06:55:45.901430 3113 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ccr2d\" (UniqueName: \"kubernetes.io/projected/ef846a94-e752-4256-892b-4fc38b104ccc-kube-api-access-ccr2d\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:45.901868 kubelet[3113]: I0702 06:55:45.901689 3113 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef846a94-e752-4256-892b-4fc38b104ccc-typha-certs\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:45.901868 kubelet[3113]: I0702 06:55:45.901704 3113 
reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef846a94-e752-4256-892b-4fc38b104ccc-tigera-ca-bundle\") on node \"ip-172-31-18-4\" DevicePath \"\"" Jul 2 06:55:45.933337 systemd[1]: Removed slice kubepods-besteffort-podef846a94_e752_4256_892b_4fc38b104ccc.slice - libcontainer container kubepods-besteffort-podef846a94_e752_4256_892b_4fc38b104ccc.slice. Jul 2 06:55:46.006136 kubelet[3113]: I0702 06:55:46.006072 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c344f826-6c7e-42ed-85f0-84c9cca63674-typha-certs\") pod \"calico-typha-6b4b486bf8-q8n9s\" (UID: \"c344f826-6c7e-42ed-85f0-84c9cca63674\") " pod="calico-system/calico-typha-6b4b486bf8-q8n9s" Jul 2 06:55:46.006478 kubelet[3113]: I0702 06:55:46.006184 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c344f826-6c7e-42ed-85f0-84c9cca63674-tigera-ca-bundle\") pod \"calico-typha-6b4b486bf8-q8n9s\" (UID: \"c344f826-6c7e-42ed-85f0-84c9cca63674\") " pod="calico-system/calico-typha-6b4b486bf8-q8n9s" Jul 2 06:55:46.006478 kubelet[3113]: I0702 06:55:46.006456 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tmp\" (UniqueName: \"kubernetes.io/projected/c344f826-6c7e-42ed-85f0-84c9cca63674-kube-api-access-d6tmp\") pod \"calico-typha-6b4b486bf8-q8n9s\" (UID: \"c344f826-6c7e-42ed-85f0-84c9cca63674\") " pod="calico-system/calico-typha-6b4b486bf8-q8n9s" Jul 2 06:55:46.199143 kubelet[3113]: I0702 06:55:46.198976 3113 scope.go:117] "RemoveContainer" containerID="12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86" Jul 2 06:55:46.205831 containerd[1789]: time="2024-07-02T06:55:46.205790515Z" level=info msg="RemoveContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\"" Jul 2 06:55:46.293312 containerd[1789]: time="2024-07-02T06:55:46.293256122Z" level=info msg="RemoveContainer for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" returns successfully" Jul 2 06:55:46.300633 kubelet[3113]: I0702 06:55:46.300594 3113 scope.go:117] "RemoveContainer" containerID="12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86" Jul 2 06:55:46.322426 containerd[1789]: time="2024-07-02T06:55:46.301915768Z" level=error msg="ContainerStatus for \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\": not found" Jul 2 06:55:46.323340 kubelet[3113]: E0702 06:55:46.323301 3113 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\": not found" containerID="12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86" Jul 2 06:55:46.323447 kubelet[3113]: I0702 06:55:46.323354 3113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86"} err="failed to get container status \"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"12f04e42ad0b26b75fc2206e68e311e400e02928dedde2e1155dd00d88a0fd86\": not found" Jul 2 06:55:46.440219 containerd[1789]: time="2024-07-02T06:55:46.440163458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4b486bf8-q8n9s,Uid:c344f826-6c7e-42ed-85f0-84c9cca63674,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:46.534057 containerd[1789]: time="2024-07-02T06:55:46.532774656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:55:46.534249 containerd[1789]: time="2024-07-02T06:55:46.533306099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:46.534249 containerd[1789]: time="2024-07-02T06:55:46.533334290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:55:46.534249 containerd[1789]: time="2024-07-02T06:55:46.533348652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:55:46.605752 systemd[1]: Started cri-containerd-f5766ea82863a08bd2d09b877c3bacb101569d7fe0ba4495990af1512a83cc62.scope - libcontainer container f5766ea82863a08bd2d09b877c3bacb101569d7fe0ba4495990af1512a83cc62. Jul 2 06:55:46.646000 audit: BPF prog-id=136 op=LOAD Jul 2 06:55:46.647000 audit: BPF prog-id=137 op=LOAD Jul 2 06:55:46.647000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635373636656138323836336130386264326430396238373763336261 Jul 2 06:55:46.648000 audit: BPF prog-id=138 op=LOAD Jul 2 06:55:46.648000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635373636656138323836336130386264326430396238373763336261 Jul 2 06:55:46.648000 audit: BPF prog-id=138 op=UNLOAD Jul 2 06:55:46.648000 audit: BPF prog-id=137 op=UNLOAD Jul 2 06:55:46.648000 audit: BPF prog-id=139 op=LOAD Jul 2 06:55:46.648000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.648000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635373636656138323836336130386264326430396238373763336261 Jul 2 06:55:46.711264 containerd[1789]: time="2024-07-02T06:55:46.710167798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4b486bf8-q8n9s,Uid:c344f826-6c7e-42ed-85f0-84c9cca63674,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5766ea82863a08bd2d09b877c3bacb101569d7fe0ba4495990af1512a83cc62\"" Jul 2 06:55:46.723963 containerd[1789]: time="2024-07-02T06:55:46.723918616Z" level=info msg="CreateContainer within sandbox \"f5766ea82863a08bd2d09b877c3bacb101569d7fe0ba4495990af1512a83cc62\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 06:55:46.751755 containerd[1789]: time="2024-07-02T06:55:46.750810080Z" level=info msg="CreateContainer within sandbox \"f5766ea82863a08bd2d09b877c3bacb101569d7fe0ba4495990af1512a83cc62\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f29968ad9217ee0181f69cb9fa90095b1ff6e471691bc37bf7519c914e77a711\"" Jul 2 06:55:46.753745 containerd[1789]: time="2024-07-02T06:55:46.753706625Z" level=info msg="StartContainer for \"f29968ad9217ee0181f69cb9fa90095b1ff6e471691bc37bf7519c914e77a711\"" Jul 2 06:55:46.904917 systemd[1]: Started cri-containerd-f29968ad9217ee0181f69cb9fa90095b1ff6e471691bc37bf7519c914e77a711.scope - libcontainer container f29968ad9217ee0181f69cb9fa90095b1ff6e471691bc37bf7519c914e77a711. Jul 2 06:55:46.919855 kubelet[3113]: E0702 06:55:46.918870 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:46.918000 audit[4019]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:46.918000 audit[4019]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeba404640 a2=0 a3=7ffeba40462c items=0 ppid=3284 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:46.920000 audit[4019]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=4019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:46.920000 audit[4019]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeba404640 a2=0 a3=0 items=0 ppid=3284 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:46.959000 audit: BPF prog-id=140 op=LOAD Jul 2 06:55:46.960000 audit: BPF prog-id=141 op=LOAD Jul 2 06:55:46.960000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 
a2=78 a3=0 items=0 ppid=3963 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393936386164393231376565303138316636396362396661393030 Jul 2 06:55:46.960000 audit: BPF prog-id=142 op=LOAD Jul 2 06:55:46.960000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3963 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393936386164393231376565303138316636396362396661393030 Jul 2 06:55:46.960000 audit: BPF prog-id=142 op=UNLOAD Jul 2 06:55:46.960000 audit: BPF prog-id=141 op=UNLOAD Jul 2 06:55:46.960000 audit: BPF prog-id=143 op=LOAD Jul 2 06:55:46.960000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3963 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:46.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393936386164393231376565303138316636396362396661393030 Jul 2 06:55:47.103104 containerd[1789]: time="2024-07-02T06:55:47.103053242Z" level=info msg="StartContainer for \"f29968ad9217ee0181f69cb9fa90095b1ff6e471691bc37bf7519c914e77a711\" returns successfully" Jul 2 06:55:47.922168 kubelet[3113]: I0702 06:55:47.922134 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef846a94-e752-4256-892b-4fc38b104ccc" path="/var/lib/kubelet/pods/ef846a94-e752-4256-892b-4fc38b104ccc/volumes" Jul 2 06:55:48.919655 kubelet[3113]: E0702 06:55:48.918057 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:49.180919 kubelet[3113]: I0702 06:55:49.177148 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:55:49.205044 kubelet[3113]: I0702 06:55:49.204980 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b4b486bf8-q8n9s" podStartSLOduration=10.20495614 podStartE2EDuration="10.20495614s" podCreationTimestamp="2024-07-02 06:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:47.2366992 +0000 UTC m=+29.527571047" watchObservedRunningTime="2024-07-02 06:55:49.20495614 +0000 UTC m=+31.495827991" Jul 2 06:55:49.553534 kernel: 
kauditd_printk_skb: 72 callbacks suppressed Jul 2 06:55:49.553683 kernel: audit: type=1325 audit(1719903349.550:535): table=filter:103 family=2 entries=15 op=nft_register_rule pid=4041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:49.550000 audit[4041]: NETFILTER_CFG table=filter:103 family=2 entries=15 op=nft_register_rule pid=4041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:49.550000 audit[4041]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd090462b0 a2=0 a3=7ffd0904629c items=0 ppid=3284 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.566707 kernel: audit: type=1300 audit(1719903349.550:535): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd090462b0 a2=0 a3=7ffd0904629c items=0 ppid=3284 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:49.552000 audit[4041]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=4041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:49.571732 kernel: audit: type=1327 audit(1719903349.550:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:49.571808 kernel: audit: type=1325 audit(1719903349.552:536): table=nat:104 family=2 entries=19 op=nft_register_chain pid=4041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:55:49.552000 audit[4041]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd090462b0 a2=0 a3=7ffd0904629c items=0 ppid=3284 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:49.580582 kernel: audit: type=1300 audit(1719903349.552:536): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd090462b0 a2=0 a3=7ffd0904629c items=0 ppid=3284 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.580688 kernel: audit: type=1327 audit(1719903349.552:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:55:49.842717 containerd[1789]: time="2024-07-02T06:55:49.842591493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:49.844050 containerd[1789]: time="2024-07-02T06:55:49.843990365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 06:55:49.872189 containerd[1789]: time="2024-07-02T06:55:49.846139466Z" level=info msg="ImageCreate event 
name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:49.872585 containerd[1789]: time="2024-07-02T06:55:49.853569507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.667492328s" Jul 2 06:55:49.872734 containerd[1789]: time="2024-07-02T06:55:49.872706798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 06:55:49.873411 containerd[1789]: time="2024-07-02T06:55:49.873346492Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:49.874410 containerd[1789]: time="2024-07-02T06:55:49.874381427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:49.878659 containerd[1789]: time="2024-07-02T06:55:49.878289235Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 06:55:49.899651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382092758.mount: Deactivated successfully. Jul 2 06:55:49.909644 containerd[1789]: time="2024-07-02T06:55:49.909590671Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2\"" Jul 2 06:55:49.911881 containerd[1789]: time="2024-07-02T06:55:49.910663820Z" level=info msg="StartContainer for \"62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2\"" Jul 2 06:55:49.975062 systemd[1]: Started cri-containerd-62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2.scope - libcontainer container 62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2. 
Jul 2 06:55:50.000530 kernel: audit: type=1334 audit(1719903349.994:537): prog-id=144 op=LOAD Jul 2 06:55:50.000793 kernel: audit: type=1300 audit(1719903349.994:537): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3732 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:50.000866 kernel: audit: type=1327 audit(1719903349.994:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393037643836613236316430303962376234303736343865613762 Jul 2 06:55:49.994000 audit: BPF prog-id=144 op=LOAD Jul 2 06:55:49.994000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3732 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393037643836613236316430303962376234303736343865613762 Jul 2 06:55:50.003648 kernel: audit: type=1334 audit(1719903349.994:538): prog-id=145 op=LOAD Jul 2 06:55:49.994000 audit: BPF prog-id=145 op=LOAD Jul 2 06:55:49.994000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3732 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393037643836613236316430303962376234303736343865613762 Jul 2 06:55:49.994000 audit: BPF prog-id=145 op=UNLOAD Jul 2 06:55:49.994000 audit: BPF prog-id=144 op=UNLOAD Jul 2 06:55:49.994000 audit: BPF prog-id=146 op=LOAD Jul 2 06:55:49.994000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3732 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:49.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393037643836613236316430303962376234303736343865613762 Jul 2 06:55:50.025174 containerd[1789]: time="2024-07-02T06:55:50.025122720Z" level=info msg="StartContainer for \"62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2\" returns successfully" Jul 2 06:55:50.856980 systemd[1]: cri-containerd-62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2.scope: Deactivated successfully. 
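(Editor's note, not part of the journal: the audit PROCTITLE fields in the records above hold the process command line as hex-encoded bytes with NUL-separated arguments. A short sketch that decodes them, shown on the iptables-restore value that recurs in the NETFILTER_CFG records earlier; the function name is ours.)

```python
def decode_proctitle(hex_proctitle: str) -> list[str]:
    """Decode an audit PROCTITLE value (hex bytes, NUL-separated argv)."""
    raw = bytes.fromhex(hex_proctitle)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```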
Jul 2 06:55:50.859000 audit: BPF prog-id=146 op=UNLOAD Jul 2 06:55:50.919216 kubelet[3113]: I0702 06:55:50.917871 3113 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 06:55:50.932435 systemd[1]: Created slice kubepods-besteffort-poddd0fcf2c_1a69_4f59_9dc5_d51372ca28c8.slice - libcontainer container kubepods-besteffort-poddd0fcf2c_1a69_4f59_9dc5_d51372ca28c8.slice. Jul 2 06:55:50.939547 containerd[1789]: time="2024-07-02T06:55:50.936963137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j88n9,Uid:dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:50.956807 kubelet[3113]: I0702 06:55:50.956357 3113 topology_manager.go:215] "Topology Admit Handler" podUID="27063218-f415-4854-a94c-adda458ba699" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ndnxh" Jul 2 06:55:50.960127 kubelet[3113]: I0702 06:55:50.960089 3113 topology_manager.go:215] "Topology Admit Handler" podUID="c97b0cef-d13a-4897-9382-2bce2f41c748" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cbbzt" Jul 2 06:55:50.960207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2-rootfs.mount: Deactivated successfully. Jul 2 06:55:50.970444 kubelet[3113]: I0702 06:55:50.970408 3113 topology_manager.go:215] "Topology Admit Handler" podUID="192f159c-ecbb-42bd-9e06-890e0a3f42d5" podNamespace="calico-system" podName="calico-kube-controllers-66d8cc657c-jrltf" Jul 2 06:55:51.002314 kubelet[3113]: I0702 06:55:51.001747 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrk4\" (UniqueName: \"kubernetes.io/projected/192f159c-ecbb-42bd-9e06-890e0a3f42d5-kube-api-access-7zrk4\") pod \"calico-kube-controllers-66d8cc657c-jrltf\" (UID: \"192f159c-ecbb-42bd-9e06-890e0a3f42d5\") " pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" Jul 2 06:55:51.002314 kubelet[3113]: I0702 06:55:51.001796 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27063218-f415-4854-a94c-adda458ba699-config-volume\") pod \"coredns-7db6d8ff4d-ndnxh\" (UID: \"27063218-f415-4854-a94c-adda458ba699\") " pod="kube-system/coredns-7db6d8ff4d-ndnxh" Jul 2 06:55:51.002314 kubelet[3113]: I0702 06:55:51.001824 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx76l\" (UniqueName: \"kubernetes.io/projected/27063218-f415-4854-a94c-adda458ba699-kube-api-access-lx76l\") pod \"coredns-7db6d8ff4d-ndnxh\" (UID: \"27063218-f415-4854-a94c-adda458ba699\") " pod="kube-system/coredns-7db6d8ff4d-ndnxh" Jul 2 06:55:51.002314 kubelet[3113]: I0702 06:55:51.001859 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c97b0cef-d13a-4897-9382-2bce2f41c748-config-volume\") pod \"coredns-7db6d8ff4d-cbbzt\" (UID: \"c97b0cef-d13a-4897-9382-2bce2f41c748\") " pod="kube-system/coredns-7db6d8ff4d-cbbzt" Jul 2 06:55:51.002314 kubelet[3113]: I0702 06:55:51.001887 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnln4\" (UniqueName: \"kubernetes.io/projected/c97b0cef-d13a-4897-9382-2bce2f41c748-kube-api-access-lnln4\") pod \"coredns-7db6d8ff4d-cbbzt\" (UID: \"c97b0cef-d13a-4897-9382-2bce2f41c748\") " 
pod="kube-system/coredns-7db6d8ff4d-cbbzt" Jul 2 06:55:51.002940 kubelet[3113]: I0702 06:55:51.001983 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/192f159c-ecbb-42bd-9e06-890e0a3f42d5-tigera-ca-bundle\") pod \"calico-kube-controllers-66d8cc657c-jrltf\" (UID: \"192f159c-ecbb-42bd-9e06-890e0a3f42d5\") " pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" Jul 2 06:55:51.052829 systemd[1]: Created slice kubepods-burstable-pod27063218_f415_4854_a94c_adda458ba699.slice - libcontainer container kubepods-burstable-pod27063218_f415_4854_a94c_adda458ba699.slice. Jul 2 06:55:51.072296 containerd[1789]: time="2024-07-02T06:55:51.072207941Z" level=info msg="shim disconnected" id=62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2 namespace=k8s.io Jul 2 06:55:51.072296 containerd[1789]: time="2024-07-02T06:55:51.072293081Z" level=warning msg="cleaning up after shim disconnected" id=62907d86a261d009b7b407648ea7b8658685acf642983dd7b035b0927a6825e2 namespace=k8s.io Jul 2 06:55:51.072296 containerd[1789]: time="2024-07-02T06:55:51.072305180Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:55:51.088184 systemd[1]: Created slice kubepods-burstable-podc97b0cef_d13a_4897_9382_2bce2f41c748.slice - libcontainer container kubepods-burstable-podc97b0cef_d13a_4897_9382_2bce2f41c748.slice. Jul 2 06:55:51.108950 systemd[1]: Created slice kubepods-besteffort-pod192f159c_ecbb_42bd_9e06_890e0a3f42d5.slice - libcontainer container kubepods-besteffort-pod192f159c_ecbb_42bd_9e06_890e0a3f42d5.slice. Jul 2 06:55:51.278157 containerd[1789]: time="2024-07-02T06:55:51.277858717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 06:55:51.360925 containerd[1789]: time="2024-07-02T06:55:51.360731251Z" level=error msg="Failed to destroy network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.361418 containerd[1789]: time="2024-07-02T06:55:51.361372266Z" level=error msg="encountered an error cleaning up failed sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.361654 containerd[1789]: time="2024-07-02T06:55:51.361445620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j88n9,Uid:dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.361910 kubelet[3113]: E0702 06:55:51.361871 3113 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.361998 kubelet[3113]: E0702 06:55:51.361957 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:51.362055 kubelet[3113]: E0702 06:55:51.361987 3113 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j88n9" Jul 2 06:55:51.362114 kubelet[3113]: E0702 06:55:51.362062 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j88n9_calico-system(dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j88n9_calico-system(dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:51.384753 containerd[1789]: time="2024-07-02T06:55:51.384696568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndnxh,Uid:27063218-f415-4854-a94c-adda458ba699,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:51.404200 containerd[1789]: time="2024-07-02T06:55:51.404153745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cbbzt,Uid:c97b0cef-d13a-4897-9382-2bce2f41c748,Namespace:kube-system,Attempt:0,}" Jul 2 06:55:51.427358 containerd[1789]: time="2024-07-02T06:55:51.427306964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d8cc657c-jrltf,Uid:192f159c-ecbb-42bd-9e06-890e0a3f42d5,Namespace:calico-system,Attempt:0,}" Jul 2 06:55:51.561384 containerd[1789]: time="2024-07-02T06:55:51.561287741Z" level=error msg="Failed to destroy network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.562888 containerd[1789]: time="2024-07-02T06:55:51.562775130Z" level=error msg="encountered an error cleaning up failed sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.563241 containerd[1789]: time="2024-07-02T06:55:51.563159542Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndnxh,Uid:27063218-f415-4854-a94c-adda458ba699,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.564017 kubelet[3113]: E0702 06:55:51.563968 3113 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.564177 kubelet[3113]: E0702 06:55:51.564080 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndnxh" Jul 2 06:55:51.564177 kubelet[3113]: E0702 06:55:51.564144 3113 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndnxh" Jul 2 06:55:51.564348 kubelet[3113]: E0702 06:55:51.564237 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ndnxh_kube-system(27063218-f415-4854-a94c-adda458ba699)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ndnxh_kube-system(27063218-f415-4854-a94c-adda458ba699)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndnxh" podUID="27063218-f415-4854-a94c-adda458ba699" Jul 2 06:55:51.651694 containerd[1789]: time="2024-07-02T06:55:51.650043938Z" level=error msg="Failed to destroy network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.651694 containerd[1789]: time="2024-07-02T06:55:51.650590653Z" level=error msg="encountered an error cleaning up failed sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.651694 containerd[1789]: 
time="2024-07-02T06:55:51.650671372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d8cc657c-jrltf,Uid:192f159c-ecbb-42bd-9e06-890e0a3f42d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.651986 kubelet[3113]: E0702 06:55:51.651317 3113 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.651986 kubelet[3113]: E0702 06:55:51.651381 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" Jul 2 06:55:51.651986 kubelet[3113]: E0702 06:55:51.651412 3113 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" Jul 2 06:55:51.652129 kubelet[3113]: E0702 06:55:51.651462 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66d8cc657c-jrltf_calico-system(192f159c-ecbb-42bd-9e06-890e0a3f42d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66d8cc657c-jrltf_calico-system(192f159c-ecbb-42bd-9e06-890e0a3f42d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" podUID="192f159c-ecbb-42bd-9e06-890e0a3f42d5" Jul 2 06:55:51.657878 containerd[1789]: time="2024-07-02T06:55:51.657814438Z" level=error msg="Failed to destroy network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.658315 containerd[1789]: time="2024-07-02T06:55:51.658267560Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.658426 containerd[1789]: time="2024-07-02T06:55:51.658345692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cbbzt,Uid:c97b0cef-d13a-4897-9382-2bce2f41c748,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.659093 kubelet[3113]: E0702 06:55:51.659049 3113 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:51.660009 kubelet[3113]: E0702 06:55:51.659156 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cbbzt" Jul 2 06:55:51.660009 kubelet[3113]: E0702 06:55:51.659221 3113 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cbbzt" Jul 2 06:55:51.660009 kubelet[3113]: E0702 06:55:51.659318 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cbbzt_kube-system(c97b0cef-d13a-4897-9382-2bce2f41c748)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cbbzt_kube-system(c97b0cef-d13a-4897-9382-2bce2f41c748)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cbbzt" podUID="c97b0cef-d13a-4897-9382-2bce2f41c748" Jul 2 06:55:51.961536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493-shm.mount: Deactivated successfully. 
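(Editor's note, not part of the journal: every sandbox failure above is the same CNI error, "stat /var/lib/calico/nodename: no such file or directory", whose message itself says to verify that calico/node is running and has mounted /var/lib/calico. A minimal diagnostic sketch along those lines, assuming the official `kubernetes` Python client is available and that calico-node carries the conventional "k8s-app=calico-node" label; both assumptions come from general Calico practice, not from this log.)

```python
import os
from kubernetes import client, config

# The CNI plugin reads the node name written by calico-node; its absence is exactly
# what the "stat /var/lib/calico/nodename" errors above report.
print("nodename file present:", os.path.exists("/var/lib/calico/nodename"))

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
pods = client.CoreV1Api().list_namespaced_pod(
    "calico-system", label_selector="k8s-app=calico-node")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```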
Jul 2 06:55:52.278227 kubelet[3113]: I0702 06:55:52.277161 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:55:52.279041 containerd[1789]: time="2024-07-02T06:55:52.279002655Z" level=info msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" Jul 2 06:55:52.305416 containerd[1789]: time="2024-07-02T06:55:52.302549467Z" level=info msg="Ensure that sandbox 187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493 in task-service has been cleanup successfully" Jul 2 06:55:52.316223 kubelet[3113]: I0702 06:55:52.313248 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:55:52.322436 containerd[1789]: time="2024-07-02T06:55:52.322386719Z" level=info msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" Jul 2 06:55:52.323287 containerd[1789]: time="2024-07-02T06:55:52.323239225Z" level=info msg="Ensure that sandbox 3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c in task-service has been cleanup successfully" Jul 2 06:55:52.349718 kubelet[3113]: I0702 06:55:52.349670 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:55:52.352105 containerd[1789]: time="2024-07-02T06:55:52.352021440Z" level=info msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" Jul 2 06:55:52.353392 containerd[1789]: time="2024-07-02T06:55:52.353361660Z" level=info msg="Ensure that sandbox bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078 in task-service has been cleanup successfully" Jul 2 06:55:52.363520 kubelet[3113]: I0702 06:55:52.363455 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:55:52.364642 containerd[1789]: time="2024-07-02T06:55:52.364463672Z" level=info msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" Jul 2 06:55:52.366888 containerd[1789]: time="2024-07-02T06:55:52.366840840Z" level=info msg="Ensure that sandbox 2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff in task-service has been cleanup successfully" Jul 2 06:55:52.430631 containerd[1789]: time="2024-07-02T06:55:52.430454155Z" level=error msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" failed" error="failed to destroy network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:52.431228 kubelet[3113]: E0702 06:55:52.431067 3113 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:55:52.431228 kubelet[3113]: 
E0702 06:55:52.431128 3113 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493"} Jul 2 06:55:52.431228 kubelet[3113]: E0702 06:55:52.431174 3113 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:55:52.431228 kubelet[3113]: E0702 06:55:52.431205 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j88n9" podUID="dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8" Jul 2 06:55:52.497213 containerd[1789]: time="2024-07-02T06:55:52.497144918Z" level=error msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" failed" error="failed to destroy network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:52.497483 kubelet[3113]: E0702 06:55:52.497438 3113 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:55:52.497628 kubelet[3113]: E0702 06:55:52.497521 3113 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff"} Jul 2 06:55:52.497628 kubelet[3113]: E0702 06:55:52.497564 3113 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c97b0cef-d13a-4897-9382-2bce2f41c748\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:55:52.497628 kubelet[3113]: E0702 06:55:52.497595 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c97b0cef-d13a-4897-9382-2bce2f41c748\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cbbzt" podUID="c97b0cef-d13a-4897-9382-2bce2f41c748" Jul 2 06:55:52.506616 containerd[1789]: time="2024-07-02T06:55:52.506530516Z" level=error msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" failed" error="failed to destroy network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:52.507161 kubelet[3113]: E0702 06:55:52.507117 3113 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:55:52.507305 kubelet[3113]: E0702 06:55:52.507180 3113 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c"} Jul 2 06:55:52.507305 kubelet[3113]: E0702 06:55:52.507262 3113 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27063218-f415-4854-a94c-adda458ba699\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:55:52.507467 kubelet[3113]: E0702 06:55:52.507294 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27063218-f415-4854-a94c-adda458ba699\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndnxh" podUID="27063218-f415-4854-a94c-adda458ba699" Jul 2 06:55:52.534464 containerd[1789]: time="2024-07-02T06:55:52.532932805Z" level=error msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" failed" error="failed to destroy network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:55:52.535974 kubelet[3113]: E0702 06:55:52.535261 3113 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:55:52.535974 kubelet[3113]: E0702 06:55:52.535357 3113 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078"} Jul 2 06:55:52.535974 kubelet[3113]: E0702 06:55:52.535423 3113 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"192f159c-ecbb-42bd-9e06-890e0a3f42d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:55:52.535974 kubelet[3113]: E0702 06:55:52.535455 3113 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"192f159c-ecbb-42bd-9e06-890e0a3f42d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" podUID="192f159c-ecbb-42bd-9e06-890e0a3f42d5" Jul 2 06:55:58.611056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266976098.mount: Deactivated successfully. 
Jul 2 06:55:58.698541 containerd[1789]: time="2024-07-02T06:55:58.697333894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 06:55:58.702774 containerd[1789]: time="2024-07-02T06:55:58.702723756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:58.715696 containerd[1789]: time="2024-07-02T06:55:58.715636052Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:58.717844 containerd[1789]: time="2024-07-02T06:55:58.717804502Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:58.723529 containerd[1789]: time="2024-07-02T06:55:58.722514768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:58.724779 containerd[1789]: time="2024-07-02T06:55:58.724635794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.44381562s" Jul 2 06:55:58.724924 containerd[1789]: time="2024-07-02T06:55:58.724783528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 06:55:58.748024 containerd[1789]: time="2024-07-02T06:55:58.747959195Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 06:55:58.785959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206815928.mount: Deactivated successfully. Jul 2 06:55:58.814130 containerd[1789]: time="2024-07-02T06:55:58.814071196Z" level=info msg="CreateContainer within sandbox \"fb514d2ead0aca7b481c570c1703d0962848bd3f84b9949280c8f3e3f4d30f17\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31\"" Jul 2 06:55:58.816593 containerd[1789]: time="2024-07-02T06:55:58.814803703Z" level=info msg="StartContainer for \"ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31\"" Jul 2 06:55:58.919789 systemd[1]: Started cri-containerd-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31.scope - libcontainer container ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31. 
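For a rough sense of scale, the pull reported above moved 115238612 bytes in 7.44381562s, roughly 15.5 MB/s. A back-of-the-envelope calculation using only the figures from the log:

    package main

    import "fmt"

    func main() {
        const imageBytes = 115238612.0 // size reported for ghcr.io/flatcar/calico/node:v3.28.0
        const pullSecs = 7.44381562    // pull duration reported by containerd
        rate := imageBytes / pullSecs
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
    }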
Jul 2 06:55:58.952000 audit: BPF prog-id=147 op=LOAD Jul 2 06:55:58.956821 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 2 06:55:58.957056 kernel: audit: type=1334 audit(1719903358.952:543): prog-id=147 op=LOAD Jul 2 06:55:58.957167 kernel: audit: type=1300 audit(1719903358.952:543): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.952000 audit[4340]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.976076 kernel: audit: type=1327 audit(1719903358.952:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164303338623430646665666461646364306666393038353538323663 Jul 2 06:55:58.976156 kernel: audit: type=1334 audit(1719903358.952:544): prog-id=148 op=LOAD Jul 2 06:55:58.976180 kernel: audit: type=1300 audit(1719903358.952:544): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.976202 kernel: audit: type=1327 audit(1719903358.952:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164303338623430646665666461646364306666393038353538323663 Jul 2 06:55:58.979351 kernel: audit: type=1334 audit(1719903358.952:545): prog-id=148 op=UNLOAD Jul 2 06:55:58.979425 kernel: audit: type=1334 audit(1719903358.952:546): prog-id=147 op=UNLOAD Jul 2 06:55:58.979458 kernel: audit: type=1334 audit(1719903358.952:547): prog-id=149 op=LOAD Jul 2 06:55:58.979517 kernel: audit: type=1300 audit(1719903358.952:547): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164303338623430646665666461646364306666393038353538323663 Jul 2 06:55:58.952000 audit: BPF prog-id=148 op=LOAD Jul 2 06:55:58.952000 audit[4340]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.952000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164303338623430646665666461646364306666393038353538323663 Jul 2 06:55:58.952000 audit: BPF prog-id=148 op=UNLOAD Jul 2 06:55:58.952000 audit: BPF prog-id=147 op=UNLOAD Jul 2 06:55:58.952000 audit: BPF prog-id=149 op=LOAD Jul 2 06:55:58.952000 audit[4340]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3732 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:58.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164303338623430646665666461646364306666393038353538323663 Jul 2 06:55:59.037034 containerd[1789]: time="2024-07-02T06:55:59.036976611Z" level=info msg="StartContainer for \"ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31\" returns successfully" Jul 2 06:55:59.204256 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 06:55:59.204428 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 06:55:59.487662 kubelet[3113]: I0702 06:55:59.486092 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jdqsw" podStartSLOduration=2.944931208 podStartE2EDuration="17.486065748s" podCreationTimestamp="2024-07-02 06:55:42 +0000 UTC" firstStartedPulling="2024-07-02 06:55:44.184949671 +0000 UTC m=+26.475821508" lastFinishedPulling="2024-07-02 06:55:58.72608421 +0000 UTC m=+41.016956048" observedRunningTime="2024-07-02 06:55:59.471938187 +0000 UTC m=+41.762810037" watchObservedRunningTime="2024-07-02 06:55:59.486065748 +0000 UTC m=+41.776937621" Jul 2 06:56:00.442912 kubelet[3113]: I0702 06:56:00.442874 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:56:00.795000 audit[4433]: AVC avc: denied { write } for pid=4433 comm="tee" name="fd" dev="proc" ino=26329 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.795000 audit[4433]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec8c0ea1d a2=241 a3=1b6 items=1 ppid=4409 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.795000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 06:56:00.795000 audit: PATH item=0 name="/dev/fd/63" inode=27217 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.816000 audit[4444]: AVC avc: denied { write } for pid=4444 comm="tee" name="fd" dev="proc" ino=27246 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.816000 audit[4444]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff20fd7a2e a2=241 a3=1b6 items=1 ppid=4413 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.816000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 06:56:00.816000 audit: PATH item=0 name="/dev/fd/63" inode=26324 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.816000 audit[4438]: AVC avc: denied { write } for pid=4438 comm="tee" name="fd" dev="proc" ino=27250 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.816000 audit[4438]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd618d0a2c a2=241 a3=1b6 items=1 ppid=4407 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.816000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 06:56:00.816000 audit: PATH item=0 name="/dev/fd/63" inode=27222 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.844000 audit[4454]: AVC avc: denied { write } for pid=4454 comm="tee" name="fd" dev="proc" ino=27262 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.844000 audit[4454]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccc7efa1c a2=241 a3=1b6 items=1 ppid=4411 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.844000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 06:56:00.844000 audit: PATH item=0 name="/dev/fd/63" inode=27243 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.940000 audit[4480]: AVC avc: denied { write } for pid=4480 comm="tee" name="fd" dev="proc" ino=27277 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.940000 audit[4480]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff70a7a2d a2=241 a3=1b6 items=1 ppid=4421 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.940000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 06:56:00.940000 audit: PATH item=0 name="/dev/fd/63" inode=27269 dev=00:0c 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.941000 audit[4475]: AVC avc: denied { write } for pid=4475 comm="tee" name="fd" dev="proc" ino=26337 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.941000 audit[4475]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcae6f8a2c a2=241 a3=1b6 items=1 ppid=4416 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.941000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 06:56:00.941000 audit: PATH item=0 name="/dev/fd/63" inode=27266 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:00.948000 audit[4485]: AVC avc: denied { write } for pid=4485 comm="tee" name="fd" dev="proc" ino=27281 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:56:00.948000 audit[4485]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4c7c3a2c a2=241 a3=1b6 items=1 ppid=4406 pid=4485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:00.948000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 06:56:00.948000 audit: PATH item=0 name="/dev/fd/63" inode=27274 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:56:00.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:56:01.546720 (udev-worker)[4374]: Network interface NamePolicy= disabled on kernel command line. 
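The AVC/SYSCALL/PROCTITLE triplets above (the tee processes feeding calico-node's per-service logs under /etc/service/enabled/*/log, along with the nearby runc and bpftool records) carry the audited command line hex-encoded and NUL-separated in the PROCTITLE field. A small stdlib decoder sketch, run against the tee record from the log:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex payload back into argv.
    func decodeProctitle(h string) ([]string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return nil, err
        }
        return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
    }

    func main() {
        // proctitle= value from the tee audit records above.
        const h = "2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633"
        argv, err := decodeProctitle(h)
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.Join(argv, " "))
        // Prints: /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /dev/fd/63
    }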
Jul 2 06:56:01.566687 systemd-networkd[1514]: vxlan.calico: Link UP Jul 2 06:56:01.566696 systemd-networkd[1514]: vxlan.calico: Gained carrier Jul 2 06:56:01.699000 audit: BPF prog-id=150 op=LOAD Jul 2 06:56:01.699000 audit[4547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff77828050 a2=70 a3=7ff64b650000 items=0 ppid=4412 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.699000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:56:01.700000 audit: BPF prog-id=150 op=UNLOAD Jul 2 06:56:01.700000 audit: BPF prog-id=151 op=LOAD Jul 2 06:56:01.700000 audit[4547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff77828050 a2=70 a3=6f items=0 ppid=4412 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:56:01.700000 audit: BPF prog-id=151 op=UNLOAD Jul 2 06:56:01.700000 audit: BPF prog-id=152 op=LOAD Jul 2 06:56:01.700000 audit[4547]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff77827fe0 a2=70 a3=7fff77828050 items=0 ppid=4412 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:56:01.701000 audit: BPF prog-id=152 op=UNLOAD Jul 2 06:56:01.703000 audit: BPF prog-id=153 op=LOAD Jul 2 06:56:01.703000 audit[4547]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff77828010 a2=70 a3=0 items=0 ppid=4412 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:56:01.707662 (udev-worker)[4546]: Network interface NamePolicy= disabled on kernel command line. Jul 2 06:56:01.707752 (udev-worker)[4548]: Network interface NamePolicy= disabled on kernel command line. 
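With the calico-node dataplane coming up, systemd-networkd reports the vxlan.calico overlay interface appearing and gaining carrier. An illustrative, stdlib-only check that the interface exists and what it looks like (run on the node itself):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Interface name reported by systemd-networkd above.
        ifi, err := net.InterfaceByName("vxlan.calico")
        if err != nil {
            fmt.Println("vxlan.calico not present:", err)
            return
        }
        fmt.Printf("index=%d mtu=%d mac=%s flags=%s\n",
            ifi.Index, ifi.MTU, ifi.HardwareAddr, ifi.Flags)
    }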
Jul 2 06:56:01.725000 audit: BPF prog-id=153 op=UNLOAD Jul 2 06:56:01.869000 audit[4576]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4576 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:01.869000 audit[4576]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe7eb99010 a2=0 a3=7ffe7eb98ffc items=0 ppid=4412 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.869000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:01.872000 audit[4578]: NETFILTER_CFG table=mangle:106 family=2 entries=16 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:01.872000 audit[4578]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffc745f640 a2=0 a3=7fffc745f62c items=0 ppid=4412 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.872000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:01.873000 audit[4577]: NETFILTER_CFG table=raw:107 family=2 entries=19 op=nft_register_chain pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:01.873000 audit[4577]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffc6b5ca620 a2=0 a3=7ffc6b5ca60c items=0 ppid=4412 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.873000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:01.878000 audit[4581]: NETFILTER_CFG table=filter:108 family=2 entries=39 op=nft_register_chain pid=4581 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:01.878000 audit[4581]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffecc7b4030 a2=0 a3=7ffecc7b401c items=0 ppid=4412 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:01.878000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:02.512505 kubelet[3113]: I0702 06:56:02.512447 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:56:02.734991 systemd[1]: run-containerd-runc-k8s.io-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31-runc.xJ7DEy.mount: Deactivated successfully. Jul 2 06:56:02.858438 systemd[1]: run-containerd-runc-k8s.io-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31-runc.wleBIv.mount: Deactivated successfully. 
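The NETFILTER_CFG records above show felix restoring its rulesets through iptables-nft-restore: 15 entries into nat, 16 into mangle, 19 into raw and 39 into filter, 89 in total. A throwaway tally of such records (field layout copied from the lines above; a sketch, not a substitute for ausearch):

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
    )

    func main() {
        // Abbreviated NETFILTER_CFG payloads from the records above.
        lines := []string{
            "table=nat:105 family=2 entries=15 op=nft_register_chain",
            "table=mangle:106 family=2 entries=16 op=nft_register_chain",
            "table=raw:107 family=2 entries=19 op=nft_register_chain",
            "table=filter:108 family=2 entries=39 op=nft_register_chain",
        }
        re := regexp.MustCompile(`table=(\w+):\d+ .*entries=(\d+) op=(\w+)`)
        total := 0
        for _, l := range lines {
            m := re.FindStringSubmatch(l)
            if m == nil {
                continue
            }
            n, _ := strconv.Atoi(m[2])
            total += n
            fmt.Printf("%-7s %-18s entries=%d\n", m[1], m[3], n)
        }
        fmt.Println("total entries registered:", total) // 89
    }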
Jul 2 06:56:03.161653 systemd-networkd[1514]: vxlan.calico: Gained IPv6LL Jul 2 06:56:03.919098 containerd[1789]: time="2024-07-02T06:56:03.918754728Z" level=info msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.065 [INFO][4651] k8s.go 608: Cleaning up netns ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.066 [INFO][4651] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" iface="eth0" netns="/var/run/netns/cni-fef06c5f-b2eb-26c6-2c96-9cbb7a61fb96" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.066 [INFO][4651] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" iface="eth0" netns="/var/run/netns/cni-fef06c5f-b2eb-26c6-2c96-9cbb7a61fb96" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.068 [INFO][4651] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" iface="eth0" netns="/var/run/netns/cni-fef06c5f-b2eb-26c6-2c96-9cbb7a61fb96" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.068 [INFO][4651] k8s.go 615: Releasing IP address(es) ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.068 [INFO][4651] utils.go 188: Calico CNI releasing IP address ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.296 [INFO][4657] ipam_plugin.go 411: Releasing address using handleID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.300 [INFO][4657] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.300 [INFO][4657] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.314 [WARNING][4657] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.314 [INFO][4657] ipam_plugin.go 439: Releasing address using workloadID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.316 [INFO][4657] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:04.326592 containerd[1789]: 2024-07-02 06:56:04.318 [INFO][4651] k8s.go 621: Teardown processing complete. 
ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:04.326592 containerd[1789]: time="2024-07-02T06:56:04.322064539Z" level=info msg="TearDown network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" successfully" Jul 2 06:56:04.326592 containerd[1789]: time="2024-07-02T06:56:04.322222569Z" level=info msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" returns successfully" Jul 2 06:56:04.326592 containerd[1789]: time="2024-07-02T06:56:04.324885101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j88n9,Uid:dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8,Namespace:calico-system,Attempt:1,}" Jul 2 06:56:04.326970 systemd[1]: run-netns-cni\x2dfef06c5f\x2db2eb\x2d26c6\x2d2c96\x2d9cbb7a61fb96.mount: Deactivated successfully. Jul 2 06:56:04.537934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:56:04.538075 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid65d9c2bb33: link becomes ready Jul 2 06:56:04.539068 systemd-networkd[1514]: calid65d9c2bb33: Link UP Jul 2 06:56:04.539301 systemd-networkd[1514]: calid65d9c2bb33: Gained carrier Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.420 [INFO][4664] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0 csi-node-driver- calico-system dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8 777 0 2024-07-02 06:55:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-18-4 csi-node-driver-j88n9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid65d9c2bb33 [] []}} ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.420 [INFO][4664] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.464 [INFO][4676] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" HandleID="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.483 [INFO][4676] ipam_plugin.go 264: Auto assigning IP ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" HandleID="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318410), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-4", "pod":"csi-node-driver-j88n9", "timestamp":"2024-07-02 06:56:04.464026912 +0000 UTC"}, Hostname:"ip-172-31-18-4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.483 [INFO][4676] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.483 [INFO][4676] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.483 [INFO][4676] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-4' Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.485 [INFO][4676] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.494 [INFO][4676] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.506 [INFO][4676] ipam.go 489: Trying affinity for 192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.509 [INFO][4676] ipam.go 155: Attempting to load block cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.512 [INFO][4676] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.512 [INFO][4676] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.515 [INFO][4676] ipam.go 1685: Creating new handle: k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163 Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.520 [INFO][4676] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.530 [INFO][4676] ipam.go 1216: Successfully claimed IPs: [192.168.13.129/26] block=192.168.13.128/26 handle="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.530 [INFO][4676] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.129/26] handle="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" host="ip-172-31-18-4" Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.530 [INFO][4676] ipam_plugin.go 373: Released host-wide IPAM lock. 
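The IPAM trace above shows ip-172-31-18-4 confirming its affinity for the block 192.168.13.128/26 (64 addresses) and handing the first one, 192.168.13.129, to csi-node-driver-j88n9. An illustrative stdlib check that the assignment is consistent with the affine block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block affinity and assigned address as reported by the Calico IPAM plugin above.
        block := netip.MustParsePrefix("192.168.13.128/26")
        pod := netip.MustParseAddr("192.168.13.129")

        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64
        fmt.Printf("%s inside block: %v\n", pod, block.Contains(pod))            // true
    }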
Jul 2 06:56:04.594985 containerd[1789]: 2024-07-02 06:56:04.530 [INFO][4676] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.129/26] IPv6=[] ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" HandleID="k8s-pod-network.39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.534 [INFO][4664] k8s.go 386: Populated endpoint ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"", Pod:"csi-node-driver-j88n9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid65d9c2bb33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.534 [INFO][4664] k8s.go 387: Calico CNI using IPs: [192.168.13.129/32] ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.534 [INFO][4664] dataplane_linux.go 68: Setting the host side veth name to calid65d9c2bb33 ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.538 [INFO][4664] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.538 [INFO][4664] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163", Pod:"csi-node-driver-j88n9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid65d9c2bb33", MAC:"f6:9d:45:18:07:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:04.596008 containerd[1789]: 2024-07-02 06:56:04.584 [INFO][4664] k8s.go 500: Wrote updated endpoint to datastore ContainerID="39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163" Namespace="calico-system" Pod="csi-node-driver-j88n9" WorkloadEndpoint="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:04.624000 audit[4694]: NETFILTER_CFG table=filter:109 family=2 entries=34 op=nft_register_chain pid=4694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:04.632289 kernel: kauditd_printk_skb: 64 callbacks suppressed Jul 2 06:56:04.632438 kernel: audit: type=1325 audit(1719903364.624:567): table=filter:109 family=2 entries=34 op=nft_register_chain pid=4694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:04.632643 kernel: audit: type=1300 audit(1719903364.624:567): arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffc3a48c600 a2=0 a3=7ffc3a48c5ec items=0 ppid=4412 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.624000 audit[4694]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffc3a48c600 a2=0 a3=7ffc3a48c5ec items=0 ppid=4412 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.624000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:04.635650 kernel: audit: type=1327 audit(1719903364.624:567): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:04.665046 containerd[1789]: time="2024-07-02T06:56:04.664878920Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:04.665046 containerd[1789]: time="2024-07-02T06:56:04.664957898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:04.665686 containerd[1789]: time="2024-07-02T06:56:04.665032045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:04.665686 containerd[1789]: time="2024-07-02T06:56:04.665054731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:04.706876 systemd[1]: Started cri-containerd-39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163.scope - libcontainer container 39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163. Jul 2 06:56:04.718000 audit: BPF prog-id=154 op=LOAD Jul 2 06:56:04.720635 kernel: audit: type=1334 audit(1719903364.718:568): prog-id=154 op=LOAD Jul 2 06:56:04.724639 kernel: audit: type=1334 audit(1719903364.719:569): prog-id=155 op=LOAD Jul 2 06:56:04.724871 kernel: audit: type=1300 audit(1719903364.719:569): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4703 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.719000 audit: BPF prog-id=155 op=LOAD Jul 2 06:56:04.719000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4703 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333538616465633836323539353337616634393539373165323235 Jul 2 06:56:04.728390 kernel: audit: type=1327 audit(1719903364.719:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333538616465633836323539353337616634393539373165323235 Jul 2 06:56:04.728433 kernel: audit: type=1334 audit(1719903364.719:570): prog-id=156 op=LOAD Jul 2 06:56:04.719000 audit: BPF prog-id=156 op=LOAD Jul 2 06:56:04.719000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4703 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.731650 kernel: audit: type=1300 audit(1719903364.719:570): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4703 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.719000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333538616465633836323539353337616634393539373165323235 Jul 2 06:56:04.735569 kernel: audit: type=1327 audit(1719903364.719:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333538616465633836323539353337616634393539373165323235 Jul 2 06:56:04.719000 audit: BPF prog-id=156 op=UNLOAD Jul 2 06:56:04.719000 audit: BPF prog-id=155 op=UNLOAD Jul 2 06:56:04.719000 audit: BPF prog-id=157 op=LOAD Jul 2 06:56:04.719000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4703 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:04.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333538616465633836323539353337616634393539373165323235 Jul 2 06:56:04.754479 containerd[1789]: time="2024-07-02T06:56:04.754442917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j88n9,Uid:dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163\"" Jul 2 06:56:04.757901 containerd[1789]: time="2024-07-02T06:56:04.757861381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 06:56:04.920338 containerd[1789]: time="2024-07-02T06:56:04.918253767Z" level=info msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.977 [INFO][4750] k8s.go 608: Cleaning up netns ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.977 [INFO][4750] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" iface="eth0" netns="/var/run/netns/cni-9e062ed7-97ff-be28-b514-503ea6e17c57" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.978 [INFO][4750] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" iface="eth0" netns="/var/run/netns/cni-9e062ed7-97ff-be28-b514-503ea6e17c57" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.978 [INFO][4750] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" iface="eth0" netns="/var/run/netns/cni-9e062ed7-97ff-be28-b514-503ea6e17c57" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.978 [INFO][4750] k8s.go 615: Releasing IP address(es) ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:04.978 [INFO][4750] utils.go 188: Calico CNI releasing IP address ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.056 [INFO][4756] ipam_plugin.go 411: Releasing address using handleID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.056 [INFO][4756] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.056 [INFO][4756] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.064 [WARNING][4756] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.064 [INFO][4756] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.066 [INFO][4756] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:05.069784 containerd[1789]: 2024-07-02 06:56:05.068 [INFO][4750] k8s.go 621: Teardown processing complete. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:05.070695 containerd[1789]: time="2024-07-02T06:56:05.070045256Z" level=info msg="TearDown network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" successfully" Jul 2 06:56:05.070695 containerd[1789]: time="2024-07-02T06:56:05.070152106Z" level=info msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" returns successfully" Jul 2 06:56:05.071203 containerd[1789]: time="2024-07-02T06:56:05.071166866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndnxh,Uid:27063218-f415-4854-a94c-adda458ba699,Namespace:kube-system,Attempt:1,}" Jul 2 06:56:05.327590 systemd[1]: run-netns-cni\x2d9e062ed7\x2d97ff\x2dbe28\x2db514\x2d503ea6e17c57.mount: Deactivated successfully. 
Jul 2 06:56:05.351830 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6e5b627e20b: link becomes ready Jul 2 06:56:05.349394 systemd-networkd[1514]: cali6e5b627e20b: Link UP Jul 2 06:56:05.350886 systemd-networkd[1514]: cali6e5b627e20b: Gained carrier Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.203 [INFO][4767] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0 coredns-7db6d8ff4d- kube-system 27063218-f415-4854-a94c-adda458ba699 783 0 2024-07-02 06:55:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-4 coredns-7db6d8ff4d-ndnxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e5b627e20b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.203 [INFO][4767] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.248 [INFO][4774] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" HandleID="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.257 [INFO][4774] ipam_plugin.go 264: Auto assigning IP ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" HandleID="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd840), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-4", "pod":"coredns-7db6d8ff4d-ndnxh", "timestamp":"2024-07-02 06:56:05.24805476 +0000 UTC"}, Hostname:"ip-172-31-18-4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.257 [INFO][4774] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.257 [INFO][4774] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.257 [INFO][4774] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-4' Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.259 [INFO][4774] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.264 [INFO][4774] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.269 [INFO][4774] ipam.go 489: Trying affinity for 192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.272 [INFO][4774] ipam.go 155: Attempting to load block cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.292 [INFO][4774] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.292 [INFO][4774] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.300 [INFO][4774] ipam.go 1685: Creating new handle: k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.315 [INFO][4774] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.335 [INFO][4774] ipam.go 1216: Successfully claimed IPs: [192.168.13.130/26] block=192.168.13.128/26 handle="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.335 [INFO][4774] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.130/26] handle="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" host="ip-172-31-18-4" Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.335 [INFO][4774] ipam_plugin.go 373: Released host-wide IPAM lock. 
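The same /26 block then yields the next address, 192.168.13.130, for coredns-7db6d8ff4d-ndnxh; the endpoint's port list earlier in the log includes 9153, the CoreDNS Prometheus metrics port. Assuming the pod becomes reachable from the node once it is running (not shown in this excerpt), a probe of that endpoint would look roughly like this:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // Pod IP assigned above; 9153 is the metrics port listed on the endpoint.
        resp, err := client.Get("http://192.168.13.130:9153/metrics")
        if err != nil {
            fmt.Println("metrics endpoint not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
        fmt.Printf("status=%s\n%s\n", resp.Status, body)
    }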
Jul 2 06:56:05.390042 containerd[1789]: 2024-07-02 06:56:05.335 [INFO][4774] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.130/26] IPv6=[] ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" HandleID="k8s-pod-network.2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.343 [INFO][4767] k8s.go 386: Populated endpoint ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27063218-f415-4854-a94c-adda458ba699", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"", Pod:"coredns-7db6d8ff4d-ndnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e5b627e20b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.343 [INFO][4767] k8s.go 387: Calico CNI using IPs: [192.168.13.130/32] ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.343 [INFO][4767] dataplane_linux.go 68: Setting the host side veth name to cali6e5b627e20b ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.351 [INFO][4767] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.360 
[INFO][4767] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27063218-f415-4854-a94c-adda458ba699", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d", Pod:"coredns-7db6d8ff4d-ndnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e5b627e20b", MAC:"ca:0a:1a:05:7f:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:05.391727 containerd[1789]: 2024-07-02 06:56:05.387 [INFO][4767] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndnxh" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:05.468983 containerd[1789]: time="2024-07-02T06:56:05.468876913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:05.469227 containerd[1789]: time="2024-07-02T06:56:05.469018624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:05.469227 containerd[1789]: time="2024-07-02T06:56:05.469083332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:05.469440 containerd[1789]: time="2024-07-02T06:56:05.469221364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:05.499000 audit[4822]: NETFILTER_CFG table=filter:110 family=2 entries=38 op=nft_register_chain pid=4822 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:05.499000 audit[4822]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffea3c3f520 a2=0 a3=7ffea3c3f50c items=0 ppid=4412 pid=4822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.499000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:05.520738 systemd[1]: Started cri-containerd-2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d.scope - libcontainer container 2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d. Jul 2 06:56:05.538000 audit: BPF prog-id=158 op=LOAD Jul 2 06:56:05.538000 audit: BPF prog-id=159 op=LOAD Jul 2 06:56:05.538000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4803 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265623335396661323366373665613765383835373233633764313663 Jul 2 06:56:05.539000 audit: BPF prog-id=160 op=LOAD Jul 2 06:56:05.539000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4803 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265623335396661323366373665613765383835373233633764313663 Jul 2 06:56:05.539000 audit: BPF prog-id=160 op=UNLOAD Jul 2 06:56:05.539000 audit: BPF prog-id=159 op=UNLOAD Jul 2 06:56:05.539000 audit: BPF prog-id=161 op=LOAD Jul 2 06:56:05.539000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4803 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265623335396661323366373665613765383835373233633764313663 Jul 2 06:56:05.581467 containerd[1789]: time="2024-07-02T06:56:05.581351815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndnxh,Uid:27063218-f415-4854-a94c-adda458ba699,Namespace:kube-system,Attempt:1,} returns sandbox id \"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d\"" Jul 2 
06:56:05.597630 containerd[1789]: time="2024-07-02T06:56:05.597581305Z" level=info msg="CreateContainer within sandbox \"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:56:05.695985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144970859.mount: Deactivated successfully. Jul 2 06:56:05.714133 containerd[1789]: time="2024-07-02T06:56:05.714082586Z" level=info msg="CreateContainer within sandbox \"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67705ca04b51fba67a39135fa417c60fc702694d19ebf598d1fadee8f0942cca\"" Jul 2 06:56:05.715518 containerd[1789]: time="2024-07-02T06:56:05.715337267Z" level=info msg="StartContainer for \"67705ca04b51fba67a39135fa417c60fc702694d19ebf598d1fadee8f0942cca\"" Jul 2 06:56:05.752721 systemd[1]: Started cri-containerd-67705ca04b51fba67a39135fa417c60fc702694d19ebf598d1fadee8f0942cca.scope - libcontainer container 67705ca04b51fba67a39135fa417c60fc702694d19ebf598d1fadee8f0942cca. Jul 2 06:56:05.802000 audit: BPF prog-id=162 op=LOAD Jul 2 06:56:05.803000 audit: BPF prog-id=163 op=LOAD Jul 2 06:56:05.803000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4803 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373035636130346235316662613637613339313335666134313763 Jul 2 06:56:05.803000 audit: BPF prog-id=164 op=LOAD Jul 2 06:56:05.803000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4803 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373035636130346235316662613637613339313335666134313763 Jul 2 06:56:05.803000 audit: BPF prog-id=164 op=UNLOAD Jul 2 06:56:05.803000 audit: BPF prog-id=163 op=UNLOAD Jul 2 06:56:05.803000 audit: BPF prog-id=165 op=LOAD Jul 2 06:56:05.803000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4803 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:05.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373035636130346235316662613637613339313335666134313763 Jul 2 06:56:05.827584 containerd[1789]: time="2024-07-02T06:56:05.827532688Z" level=info msg="StartContainer for \"67705ca04b51fba67a39135fa417c60fc702694d19ebf598d1fadee8f0942cca\" returns successfully" Jul 2 06:56:06.108045 systemd[1]: 
Started sshd@7-172.31.18.4:22-139.178.89.65:52602.service - OpenSSH per-connection server daemon (139.178.89.65:52602). Jul 2 06:56:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.4:22-139.178.89.65:52602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:06.425000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:06.427398 sshd[4879]: Accepted publickey for core from 139.178.89.65 port 52602 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:06.426000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:06.427000 audit[4879]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2a379520 a2=3 a3=7f51109a3480 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:06.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:06.433006 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:06.444509 systemd-logind[1779]: New session 8 of user core. Jul 2 06:56:06.446884 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 06:56:06.457000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:06.461000 audit[4889]: CRED_ACQ pid=4889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:06.487393 systemd-networkd[1514]: calid65d9c2bb33: Gained IPv6LL Jul 2 06:56:06.550788 systemd-networkd[1514]: cali6e5b627e20b: Gained IPv6LL Jul 2 06:56:06.650000 audit[4892]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:06.650000 audit[4892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff449048f0 a2=0 a3=7fff449048dc items=0 ppid=3284 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:06.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:06.656000 audit[4892]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:06.656000 audit[4892]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff449048f0 a2=0 a3=0 items=0 ppid=3284 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:06.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:06.926926 containerd[1789]: time="2024-07-02T06:56:06.925867865Z" level=info msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" Jul 2 06:56:06.927515 containerd[1789]: time="2024-07-02T06:56:06.927199415Z" level=info msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" Jul 2 06:56:07.278823 containerd[1789]: time="2024-07-02T06:56:07.270581426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:07.286921 containerd[1789]: time="2024-07-02T06:56:07.286361719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 06:56:07.289529 containerd[1789]: time="2024-07-02T06:56:07.289448657Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:07.310472 kubelet[3113]: I0702 06:56:07.309642 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ndnxh" podStartSLOduration=36.309618984 podStartE2EDuration="36.309618984s" podCreationTimestamp="2024-07-02 06:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 06:56:06.538274483 +0000 UTC m=+48.829146334" watchObservedRunningTime="2024-07-02 06:56:07.309618984 +0000 UTC m=+49.600490831" Jul 2 06:56:07.311037 containerd[1789]: time="2024-07-02T06:56:07.309833767Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:07.329357 containerd[1789]: time="2024-07-02T06:56:07.325552519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:07.329357 containerd[1789]: time="2024-07-02T06:56:07.327898436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.569592237s" Jul 2 06:56:07.329357 containerd[1789]: time="2024-07-02T06:56:07.327959119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 06:56:07.334864 containerd[1789]: time="2024-07-02T06:56:07.334815877Z" level=info msg="CreateContainer within sandbox \"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 06:56:07.378174 sshd[4879]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:07.395000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:07.396000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:07.405401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729806190.mount: Deactivated successfully. Jul 2 06:56:07.407687 systemd[1]: sshd@7-172.31.18.4:22-139.178.89.65:52602.service: Deactivated successfully. Jul 2 06:56:07.408617 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 06:56:07.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.4:22-139.178.89.65:52602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:07.413079 systemd-logind[1779]: Session 8 logged out. Waiting for processes to exit. Jul 2 06:56:07.415005 systemd-logind[1779]: Removed session 8. 
Jul 2 06:56:07.426858 containerd[1789]: time="2024-07-02T06:56:07.426798497Z" level=info msg="CreateContainer within sandbox \"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e85c465ab356934aa618551d557741fbf155947f24aebfbb197a278df6ebd704\"" Jul 2 06:56:07.431156 containerd[1789]: time="2024-07-02T06:56:07.431012475Z" level=info msg="StartContainer for \"e85c465ab356934aa618551d557741fbf155947f24aebfbb197a278df6ebd704\"" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.294 [INFO][4932] k8s.go 608: Cleaning up netns ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.294 [INFO][4932] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" iface="eth0" netns="/var/run/netns/cni-f5d4bdc6-29c8-5809-f847-65022b94308c" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.294 [INFO][4932] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" iface="eth0" netns="/var/run/netns/cni-f5d4bdc6-29c8-5809-f847-65022b94308c" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.298 [INFO][4932] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" iface="eth0" netns="/var/run/netns/cni-f5d4bdc6-29c8-5809-f847-65022b94308c" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.298 [INFO][4932] k8s.go 615: Releasing IP address(es) ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.298 [INFO][4932] utils.go 188: Calico CNI releasing IP address ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.423 [INFO][4947] ipam_plugin.go 411: Releasing address using handleID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.424 [INFO][4947] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.424 [INFO][4947] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.438 [WARNING][4947] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.438 [INFO][4947] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.441 [INFO][4947] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:07.446478 containerd[1789]: 2024-07-02 06:56:07.443 [INFO][4932] k8s.go 621: Teardown processing complete. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:07.456483 systemd[1]: run-netns-cni\x2df5d4bdc6\x2d29c8\x2d5809\x2df847\x2d65022b94308c.mount: Deactivated successfully. Jul 2 06:56:07.458988 containerd[1789]: time="2024-07-02T06:56:07.457957095Z" level=info msg="TearDown network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" successfully" Jul 2 06:56:07.458988 containerd[1789]: time="2024-07-02T06:56:07.458004577Z" level=info msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" returns successfully" Jul 2 06:56:07.459256 containerd[1789]: time="2024-07-02T06:56:07.459217501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d8cc657c-jrltf,Uid:192f159c-ecbb-42bd-9e06-890e0a3f42d5,Namespace:calico-system,Attempt:1,}" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.360 [INFO][4931] k8s.go 608: Cleaning up netns ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.360 [INFO][4931] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" iface="eth0" netns="/var/run/netns/cni-8b3108bb-55ca-187d-13a3-7d33faa81fae" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.360 [INFO][4931] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" iface="eth0" netns="/var/run/netns/cni-8b3108bb-55ca-187d-13a3-7d33faa81fae" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.361 [INFO][4931] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" iface="eth0" netns="/var/run/netns/cni-8b3108bb-55ca-187d-13a3-7d33faa81fae" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.361 [INFO][4931] k8s.go 615: Releasing IP address(es) ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.361 [INFO][4931] utils.go 188: Calico CNI releasing IP address ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.475 [INFO][4952] ipam_plugin.go 411: Releasing address using handleID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.476 [INFO][4952] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.476 [INFO][4952] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.497 [WARNING][4952] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.497 [INFO][4952] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.502 [INFO][4952] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:07.518881 containerd[1789]: 2024-07-02 06:56:07.513 [INFO][4931] k8s.go 621: Teardown processing complete. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:07.526871 systemd[1]: run-netns-cni\x2d8b3108bb\x2d55ca\x2d187d\x2d13a3\x2d7d33faa81fae.mount: Deactivated successfully. 
Jul 2 06:56:07.535011 containerd[1789]: time="2024-07-02T06:56:07.534947663Z" level=info msg="TearDown network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" successfully" Jul 2 06:56:07.535200 containerd[1789]: time="2024-07-02T06:56:07.535173923Z" level=info msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" returns successfully" Jul 2 06:56:07.536310 containerd[1789]: time="2024-07-02T06:56:07.536193589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cbbzt,Uid:c97b0cef-d13a-4897-9382-2bce2f41c748,Namespace:kube-system,Attempt:1,}" Jul 2 06:56:07.572000 audit[4964]: NETFILTER_CFG table=filter:113 family=2 entries=11 op=nft_register_rule pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:07.572000 audit[4964]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffeab13fb50 a2=0 a3=7ffeab13fb3c items=0 ppid=3284 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:07.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:07.578000 audit[4964]: NETFILTER_CFG table=nat:114 family=2 entries=35 op=nft_register_chain pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:07.578000 audit[4964]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeab13fb50 a2=0 a3=7ffeab13fb3c items=0 ppid=3284 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:07.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:07.694781 systemd[1]: Started cri-containerd-e85c465ab356934aa618551d557741fbf155947f24aebfbb197a278df6ebd704.scope - libcontainer container e85c465ab356934aa618551d557741fbf155947f24aebfbb197a278df6ebd704. 
Jul 2 06:56:07.951000 audit: BPF prog-id=166 op=LOAD Jul 2 06:56:07.951000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4703 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:07.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356334363561623335363933346161363138353531643535373734 Jul 2 06:56:07.952000 audit: BPF prog-id=167 op=LOAD Jul 2 06:56:07.952000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4703 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:07.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356334363561623335363933346161363138353531643535373734 Jul 2 06:56:07.952000 audit: BPF prog-id=167 op=UNLOAD Jul 2 06:56:07.952000 audit: BPF prog-id=166 op=UNLOAD Jul 2 06:56:07.952000 audit: BPF prog-id=168 op=LOAD Jul 2 06:56:07.952000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4703 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:07.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356334363561623335363933346161363138353531643535373734 Jul 2 06:56:08.072662 systemd-networkd[1514]: calie3504a55f48: Link UP Jul 2 06:56:08.074325 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:56:08.074423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie3504a55f48: link becomes ready Jul 2 06:56:08.074928 systemd-networkd[1514]: calie3504a55f48: Gained carrier Jul 2 06:56:08.095805 containerd[1789]: time="2024-07-02T06:56:08.095755113Z" level=info msg="StartContainer for \"e85c465ab356934aa618551d557741fbf155947f24aebfbb197a278df6ebd704\" returns successfully" Jul 2 06:56:08.100307 containerd[1789]: time="2024-07-02T06:56:08.100265694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.744 [INFO][4967] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0 calico-kube-controllers-66d8cc657c- calico-system 192f159c-ecbb-42bd-9e06-890e0a3f42d5 833 0 2024-07-02 06:55:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66d8cc657c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-4 calico-kube-controllers-66d8cc657c-jrltf eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3504a55f48 [] []}} ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.754 [INFO][4967] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.916 [INFO][5011] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" HandleID="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.993 [INFO][5011] ipam_plugin.go 264: Auto assigning IP ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" HandleID="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-4", "pod":"calico-kube-controllers-66d8cc657c-jrltf", "timestamp":"2024-07-02 06:56:07.916509178 +0000 UTC"}, Hostname:"ip-172-31-18-4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.993 [INFO][5011] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.993 [INFO][5011] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:07.993 [INFO][5011] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-4' Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.006 [INFO][5011] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.015 [INFO][5011] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.021 [INFO][5011] ipam.go 489: Trying affinity for 192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.024 [INFO][5011] ipam.go 155: Attempting to load block cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.028 [INFO][5011] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.028 [INFO][5011] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.032 [INFO][5011] ipam.go 1685: Creating new handle: k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.038 [INFO][5011] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.051 [INFO][5011] ipam.go 1216: Successfully claimed IPs: [192.168.13.131/26] block=192.168.13.128/26 handle="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.051 [INFO][5011] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.131/26] handle="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" host="ip-172-31-18-4" Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.051 [INFO][5011] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:56:08.115005 containerd[1789]: 2024-07-02 06:56:08.051 [INFO][5011] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.131/26] IPv6=[] ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" HandleID="k8s-pod-network.1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.063 [INFO][4967] k8s.go 386: Populated endpoint ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0", GenerateName:"calico-kube-controllers-66d8cc657c-", Namespace:"calico-system", SelfLink:"", UID:"192f159c-ecbb-42bd-9e06-890e0a3f42d5", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d8cc657c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"", Pod:"calico-kube-controllers-66d8cc657c-jrltf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3504a55f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.064 [INFO][4967] k8s.go 387: Calico CNI using IPs: [192.168.13.131/32] ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.064 [INFO][4967] dataplane_linux.go 68: Setting the host side veth name to calie3504a55f48 ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.075 [INFO][4967] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.082 [INFO][4967] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0", GenerateName:"calico-kube-controllers-66d8cc657c-", Namespace:"calico-system", SelfLink:"", UID:"192f159c-ecbb-42bd-9e06-890e0a3f42d5", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d8cc657c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e", Pod:"calico-kube-controllers-66d8cc657c-jrltf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3504a55f48", MAC:"0e:61:1a:08:f9:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:08.116904 containerd[1789]: 2024-07-02 06:56:08.101 [INFO][4967] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e" Namespace="calico-system" Pod="calico-kube-controllers-66d8cc657c-jrltf" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:08.168067 containerd[1789]: time="2024-07-02T06:56:08.167687039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:08.168067 containerd[1789]: time="2024-07-02T06:56:08.167759710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:08.168067 containerd[1789]: time="2024-07-02T06:56:08.167788062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:08.168067 containerd[1789]: time="2024-07-02T06:56:08.167809352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:08.181874 systemd-networkd[1514]: cali19b5a0fda32: Link UP Jul 2 06:56:08.183152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali19b5a0fda32: link becomes ready Jul 2 06:56:08.183401 systemd-networkd[1514]: cali19b5a0fda32: Gained carrier Jul 2 06:56:08.218738 systemd[1]: Started cri-containerd-1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e.scope - libcontainer container 1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e. 
Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:07.816 [INFO][4985] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0 coredns-7db6d8ff4d- kube-system c97b0cef-d13a-4897-9382-2bce2f41c748 834 0 2024-07-02 06:55:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-4 coredns-7db6d8ff4d-cbbzt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19b5a0fda32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:07.816 [INFO][4985] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.039 [INFO][5020] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" HandleID="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.059 [INFO][5020] ipam_plugin.go 264: Auto assigning IP ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" HandleID="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f100), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-4", "pod":"coredns-7db6d8ff4d-cbbzt", "timestamp":"2024-07-02 06:56:08.039407071 +0000 UTC"}, Hostname:"ip-172-31-18-4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.059 [INFO][5020] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.060 [INFO][5020] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.060 [INFO][5020] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-4' Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.062 [INFO][5020] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.076 [INFO][5020] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.094 [INFO][5020] ipam.go 489: Trying affinity for 192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.105 [INFO][5020] ipam.go 155: Attempting to load block cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.113 [INFO][5020] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.113 [INFO][5020] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.117 [INFO][5020] ipam.go 1685: Creating new handle: k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5 Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.127 [INFO][5020] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.171 [INFO][5020] ipam.go 1216: Successfully claimed IPs: [192.168.13.132/26] block=192.168.13.128/26 handle="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.171 [INFO][5020] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.132/26] handle="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" host="ip-172-31-18-4" Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.171 [INFO][5020] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:56:08.239697 containerd[1789]: 2024-07-02 06:56:08.171 [INFO][5020] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.132/26] IPv6=[] ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" HandleID="k8s-pod-network.ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.173 [INFO][4985] k8s.go 386: Populated endpoint ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c97b0cef-d13a-4897-9382-2bce2f41c748", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"", Pod:"coredns-7db6d8ff4d-cbbzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19b5a0fda32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.174 [INFO][4985] k8s.go 387: Calico CNI using IPs: [192.168.13.132/32] ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.174 [INFO][4985] dataplane_linux.go 68: Setting the host side veth name to cali19b5a0fda32 ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.186 [INFO][4985] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.187 
[INFO][4985] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c97b0cef-d13a-4897-9382-2bce2f41c748", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5", Pod:"coredns-7db6d8ff4d-cbbzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19b5a0fda32", MAC:"3a:92:1c:10:36:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:08.240724 containerd[1789]: 2024-07-02 06:56:08.237 [INFO][4985] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cbbzt" WorkloadEndpoint="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:08.242000 audit: BPF prog-id=169 op=LOAD Jul 2 06:56:08.242000 audit: BPF prog-id=170 op=LOAD Jul 2 06:56:08.242000 audit[5071]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5060 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393632666630316261633531626132326139396563373837643066 Jul 2 06:56:08.243000 audit: BPF prog-id=171 op=LOAD Jul 2 06:56:08.243000 audit[5071]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5060 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 2 06:56:08.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393632666630316261633531626132326139396563373837643066 Jul 2 06:56:08.244000 audit: BPF prog-id=171 op=UNLOAD Jul 2 06:56:08.244000 audit: BPF prog-id=170 op=UNLOAD Jul 2 06:56:08.244000 audit: BPF prog-id=172 op=LOAD Jul 2 06:56:08.244000 audit[5071]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5060 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393632666630316261633531626132326139396563373837643066 Jul 2 06:56:08.236000 audit[5072]: NETFILTER_CFG table=filter:115 family=2 entries=38 op=nft_register_chain pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:08.236000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffe86445860 a2=0 a3=7ffe8644584c items=0 ppid=4412 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.236000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:08.285070 containerd[1789]: time="2024-07-02T06:56:08.284811305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:08.285070 containerd[1789]: time="2024-07-02T06:56:08.284873880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:08.285070 containerd[1789]: time="2024-07-02T06:56:08.284894697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:08.285070 containerd[1789]: time="2024-07-02T06:56:08.284909407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:08.310728 systemd[1]: Started cri-containerd-ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5.scope - libcontainer container ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5. 
Jul 2 06:56:08.333000 audit: BPF prog-id=173 op=LOAD Jul 2 06:56:08.333000 audit: BPF prog-id=174 op=LOAD Jul 2 06:56:08.333000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=5108 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563363266353661643137376238653030396430373261323539666161 Jul 2 06:56:08.334000 audit: BPF prog-id=175 op=LOAD Jul 2 06:56:08.334000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=5108 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563363266353661643137376238653030396430373261323539666161 Jul 2 06:56:08.334000 audit: BPF prog-id=175 op=UNLOAD Jul 2 06:56:08.334000 audit: BPF prog-id=174 op=UNLOAD Jul 2 06:56:08.334000 audit: BPF prog-id=176 op=LOAD Jul 2 06:56:08.334000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=5108 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563363266353661643137376238653030396430373261323539666161 Jul 2 06:56:08.533686 containerd[1789]: time="2024-07-02T06:56:08.533556729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66d8cc657c-jrltf,Uid:192f159c-ecbb-42bd-9e06-890e0a3f42d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e\"" Jul 2 06:56:08.572699 containerd[1789]: time="2024-07-02T06:56:08.572650346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cbbzt,Uid:c97b0cef-d13a-4897-9382-2bce2f41c748,Namespace:kube-system,Attempt:1,} returns sandbox id \"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5\"" Jul 2 06:56:08.575000 audit[5143]: NETFILTER_CFG table=filter:116 family=2 entries=38 op=nft_register_chain pid=5143 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:08.575000 audit[5143]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7fff0f6de610 a2=0 a3=7fff0f6de5fc items=0 ppid=4412 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.575000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:08.585039 containerd[1789]: time="2024-07-02T06:56:08.584966735Z" level=info msg="CreateContainer within sandbox \"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:56:08.618047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746383871.mount: Deactivated successfully. Jul 2 06:56:08.650267 containerd[1789]: time="2024-07-02T06:56:08.650195150Z" level=info msg="CreateContainer within sandbox \"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc5899a8d1a81c6026b59909be33279c695bba4dee9df76830f6fe157016bd04\"" Jul 2 06:56:08.662110 containerd[1789]: time="2024-07-02T06:56:08.662062471Z" level=info msg="StartContainer for \"bc5899a8d1a81c6026b59909be33279c695bba4dee9df76830f6fe157016bd04\"" Jul 2 06:56:08.721690 systemd[1]: Started cri-containerd-bc5899a8d1a81c6026b59909be33279c695bba4dee9df76830f6fe157016bd04.scope - libcontainer container bc5899a8d1a81c6026b59909be33279c695bba4dee9df76830f6fe157016bd04. Jul 2 06:56:08.745000 audit: BPF prog-id=177 op=LOAD Jul 2 06:56:08.746000 audit: BPF prog-id=178 op=LOAD Jul 2 06:56:08.746000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5108 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263353839396138643161383163363032366235393930396265333332 Jul 2 06:56:08.746000 audit: BPF prog-id=179 op=LOAD Jul 2 06:56:08.746000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5108 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263353839396138643161383163363032366235393930396265333332 Jul 2 06:56:08.746000 audit: BPF prog-id=179 op=UNLOAD Jul 2 06:56:08.746000 audit: BPF prog-id=178 op=UNLOAD Jul 2 06:56:08.746000 audit: BPF prog-id=180 op=LOAD Jul 2 06:56:08.746000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5108 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:08.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263353839396138643161383163363032366235393930396265333332 Jul 2 06:56:08.786076 containerd[1789]: time="2024-07-02T06:56:08.785943844Z" 
level=info msg="StartContainer for \"bc5899a8d1a81c6026b59909be33279c695bba4dee9df76830f6fe157016bd04\" returns successfully" Jul 2 06:56:09.571625 kubelet[3113]: I0702 06:56:09.571029 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cbbzt" podStartSLOduration=38.571005138 podStartE2EDuration="38.571005138s" podCreationTimestamp="2024-07-02 06:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:09.549826926 +0000 UTC m=+51.840698775" watchObservedRunningTime="2024-07-02 06:56:09.571005138 +0000 UTC m=+51.861876980" Jul 2 06:56:09.680000 audit[5190]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.681521 kernel: kauditd_printk_skb: 108 callbacks suppressed Jul 2 06:56:09.681622 kernel: audit: type=1325 audit(1719903369.680:625): table=filter:117 family=2 entries=8 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.680000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdf3630060 a2=0 a3=7ffdf363004c items=0 ppid=3284 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.686513 kernel: audit: type=1300 audit(1719903369.680:625): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdf3630060 a2=0 a3=7ffdf363004c items=0 ppid=3284 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.688386 kernel: audit: type=1327 audit(1719903369.680:625): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.687382 systemd-networkd[1514]: cali19b5a0fda32: Gained IPv6LL Jul 2 06:56:09.751721 systemd-networkd[1514]: calie3504a55f48: Gained IPv6LL Jul 2 06:56:09.780803 kernel: audit: type=1325 audit(1719903369.683:626): table=nat:118 family=2 entries=44 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.780934 kernel: audit: type=1300 audit(1719903369.683:626): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdf3630060 a2=0 a3=7ffdf363004c items=0 ppid=3284 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.780969 kernel: audit: type=1327 audit(1719903369.683:626): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.683000 audit[5190]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.683000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdf3630060 a2=0 a3=7ffdf363004c items=0 ppid=3284 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.789000 audit[5192]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.789000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe479bd6e0 a2=0 a3=7ffe479bd6cc items=0 ppid=3284 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.791781 kernel: audit: type=1325 audit(1719903369.789:627): table=filter:119 family=2 entries=8 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.791877 kernel: audit: type=1300 audit(1719903369.789:627): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe479bd6e0 a2=0 a3=7ffe479bd6cc items=0 ppid=3284 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.797765 kernel: audit: type=1327 audit(1719903369.789:627): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:09.818000 audit[5192]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.821526 kernel: audit: type=1325 audit(1719903369.818:628): table=nat:120 family=2 entries=56 op=nft_register_chain pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:09.818000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe479bd6e0 a2=0 a3=7ffe479bd6cc items=0 ppid=3284 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:09.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:10.242028 containerd[1789]: time="2024-07-02T06:56:10.241980344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.243454 containerd[1789]: time="2024-07-02T06:56:10.243396830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 06:56:10.245070 containerd[1789]: time="2024-07-02T06:56:10.245032088Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.248451 containerd[1789]: time="2024-07-02T06:56:10.248407410Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 2 06:56:10.251627 containerd[1789]: time="2024-07-02T06:56:10.251583055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.252621 containerd[1789]: time="2024-07-02T06:56:10.252579285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.152105414s" Jul 2 06:56:10.252776 containerd[1789]: time="2024-07-02T06:56:10.252749989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 06:56:10.254813 containerd[1789]: time="2024-07-02T06:56:10.254772491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 06:56:10.258169 containerd[1789]: time="2024-07-02T06:56:10.258131267Z" level=info msg="CreateContainer within sandbox \"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 06:56:10.290391 containerd[1789]: time="2024-07-02T06:56:10.290340195Z" level=info msg="CreateContainer within sandbox \"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942\"" Jul 2 06:56:10.291659 containerd[1789]: time="2024-07-02T06:56:10.291579446Z" level=info msg="StartContainer for \"ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942\"" Jul 2 06:56:10.389078 systemd[1]: run-containerd-runc-k8s.io-ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942-runc.aXTZGy.mount: Deactivated successfully. Jul 2 06:56:10.407741 systemd[1]: Started cri-containerd-ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942.scope - libcontainer container ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942. 
Jul 2 06:56:10.496000 audit: BPF prog-id=181 op=LOAD Jul 2 06:56:10.496000 audit[5205]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4703 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:10.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562626234356532313263656435306262333534373562313761623936 Jul 2 06:56:10.496000 audit: BPF prog-id=182 op=LOAD Jul 2 06:56:10.496000 audit[5205]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4703 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:10.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562626234356532313263656435306262333534373562313761623936 Jul 2 06:56:10.496000 audit: BPF prog-id=182 op=UNLOAD Jul 2 06:56:10.496000 audit: BPF prog-id=181 op=UNLOAD Jul 2 06:56:10.496000 audit: BPF prog-id=183 op=LOAD Jul 2 06:56:10.496000 audit[5205]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4703 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:10.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562626234356532313263656435306262333534373562313761623936 Jul 2 06:56:10.592769 containerd[1789]: time="2024-07-02T06:56:10.592709796Z" level=info msg="StartContainer for \"ebbb45e212ced50bb35475b17ab9694e1bed7af2e80be102d99e7af6d64f8942\" returns successfully" Jul 2 06:56:11.291535 kubelet[3113]: I0702 06:56:11.291464 3113 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 06:56:11.292140 kubelet[3113]: I0702 06:56:11.292122 3113 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 06:56:12.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.4:22-139.178.89.65:50900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:12.416077 systemd[1]: Started sshd@8-172.31.18.4:22-139.178.89.65:50900.service - OpenSSH per-connection server daemon (139.178.89.65:50900). 
Jul 2 06:56:12.651000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:12.654000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:12.654000 audit[5248]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9ed546c0 a2=3 a3=7fceeb535480 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:12.654000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:12.655832 sshd[5248]: Accepted publickey for core from 139.178.89.65 port 50900 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:12.663053 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:12.675213 systemd-logind[1779]: New session 9 of user core. Jul 2 06:56:12.678739 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 06:56:12.689000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:12.692000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:13.295093 sshd[5248]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:13.296000 audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:13.296000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:13.299959 systemd-logind[1779]: Session 9 logged out. Waiting for processes to exit. Jul 2 06:56:13.301119 systemd[1]: sshd@8-172.31.18.4:22-139.178.89.65:50900.service: Deactivated successfully. Jul 2 06:56:13.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.4:22-139.178.89.65:50900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:13.302216 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 06:56:13.303883 systemd-logind[1779]: Removed session 9. 
Jul 2 06:56:13.464000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:13.464000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00249a900 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:13.464000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:13.465000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:13.465000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0020f9ce0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:13.465000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:14.285000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.285000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c0103324e0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.285000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.287000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7806 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.287000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c010f45c50 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.287000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.289000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.289000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c010f45c80 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.289000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.291802 containerd[1789]: time="2024-07-02T06:56:14.291758142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.294040 containerd[1789]: time="2024-07-02T06:56:14.293983676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 06:56:14.295000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.295000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c008525420 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.295000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.296881 containerd[1789]: time="2024-07-02T06:56:14.296836985Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.300000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.300000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c010332630 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.300000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.302000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:14.302000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c0082444e0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:56:14.302000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:56:14.324354 containerd[1789]: time="2024-07-02T06:56:14.324311096Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.327254 containerd[1789]: time="2024-07-02T06:56:14.327108910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.328606 containerd[1789]: time="2024-07-02T06:56:14.328560607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.073747109s" Jul 2 06:56:14.328858 containerd[1789]: time="2024-07-02T06:56:14.328828906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 06:56:14.381627 containerd[1789]: time="2024-07-02T06:56:14.381395501Z" level=info msg="CreateContainer within sandbox \"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 06:56:14.410776 containerd[1789]: time="2024-07-02T06:56:14.410720964Z" level=info msg="CreateContainer within sandbox \"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8\"" Jul 2 06:56:14.411583 containerd[1789]: time="2024-07-02T06:56:14.411541616Z" level=info msg="StartContainer for \"aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8\"" Jul 2 06:56:14.413036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725655983.mount: Deactivated successfully. 
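Editor's note: the AVC records above show kube-apiserver and kube-controller-manager being denied inotify watches on the certificate files under /etc/kubernetes/pki (on x86-64, syscall 254 is inotify_add_watch), with each call failing as success=no exit=-13. A quick standard-library check, included only as a hypothetical illustration, of what that return code means:

```python
import errno, os

# The audit SYSCALL records report success=no exit=-13 for the denied watch attempts.
print(errno.errorcode[13])   # 'EACCES'
print(os.strerror(13))       # 'Permission denied' - consistent with the AVC denial at permissive=0
```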
Jul 2 06:56:14.521848 systemd[1]: Started cri-containerd-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8.scope - libcontainer container aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8. Jul 2 06:56:14.624000 audit: BPF prog-id=184 op=LOAD Jul 2 06:56:14.625000 audit: BPF prog-id=185 op=LOAD Jul 2 06:56:14.625000 audit[5276]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5060 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:14.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333064643637323631323634633230643330316533363638376535 Jul 2 06:56:14.625000 audit: BPF prog-id=186 op=LOAD Jul 2 06:56:14.625000 audit[5276]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5060 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:14.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333064643637323631323634633230643330316533363638376535 Jul 2 06:56:14.625000 audit: BPF prog-id=186 op=UNLOAD Jul 2 06:56:14.626000 audit: BPF prog-id=185 op=UNLOAD Jul 2 06:56:14.626000 audit: BPF prog-id=187 op=LOAD Jul 2 06:56:14.626000 audit[5276]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=5060 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:14.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333064643637323631323634633230643330316533363638376535 Jul 2 06:56:14.794409 containerd[1789]: time="2024-07-02T06:56:14.794348969Z" level=info msg="StartContainer for \"aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8\" returns successfully" Jul 2 06:56:15.650156 kubelet[3113]: I0702 06:56:15.648420 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j88n9" podStartSLOduration=32.151456128 podStartE2EDuration="37.648398873s" podCreationTimestamp="2024-07-02 06:55:38 +0000 UTC" firstStartedPulling="2024-07-02 06:56:04.756829776 +0000 UTC m=+47.047701614" lastFinishedPulling="2024-07-02 06:56:10.253772509 +0000 UTC m=+52.544644359" observedRunningTime="2024-07-02 06:56:11.628034604 +0000 UTC m=+53.918906451" watchObservedRunningTime="2024-07-02 06:56:15.648398873 +0000 UTC m=+57.939270721" Jul 2 06:56:15.676373 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.JwovfN.mount: Deactivated successfully. 
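Editor's note: the kubelet pod_startup_latency_tracker entry above reports both an end-to-end and an SLO-relevant startup duration for csi-node-driver-j88n9. Both figures can be reproduced, to within rounding, from the timestamps printed in the same entry; a small sketch of the arithmetic (standard-library Python, nanosecond fields truncated to microseconds for %f):

```python
from datetime import datetime

# Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2024-07-02 06:55:38.000000", fmt)  # podCreationTimestamp
pull_from = datetime.strptime("2024-07-02 06:56:04.756829", fmt)  # firstStartedPulling
pull_to   = datetime.strptime("2024-07-02 06:56:10.253772", fmt)  # lastFinishedPulling
watched   = datetime.strptime("2024-07-02 06:56:15.648398", fmt)  # watchObservedRunningTime

e2e  = (watched - created).total_seconds()    # ~37.648 s -> podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()  # ~5.497 s image pull window
print(e2e, e2e - pull)                        # e2e - pull ~ 32.151 s -> podStartSLOduration
```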
Jul 2 06:56:15.742680 kubelet[3113]: I0702 06:56:15.742091 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66d8cc657c-jrltf" podStartSLOduration=30.935151815 podStartE2EDuration="36.742069337s" podCreationTimestamp="2024-07-02 06:55:39 +0000 UTC" firstStartedPulling="2024-07-02 06:56:08.541422128 +0000 UTC m=+50.832293969" lastFinishedPulling="2024-07-02 06:56:14.348339657 +0000 UTC m=+56.639211491" observedRunningTime="2024-07-02 06:56:15.650110231 +0000 UTC m=+57.940982080" watchObservedRunningTime="2024-07-02 06:56:15.742069337 +0000 UTC m=+58.032941186" Jul 2 06:56:17.165420 kernel: kauditd_printk_skb: 60 callbacks suppressed Jul 2 06:56:17.165603 kernel: audit: type=1400 audit(1719903377.157:657): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.165650 kernel: audit: type=1300 audit(1719903377.157:657): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eac0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.157000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.157000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eac0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.157000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.169512 kernel: audit: type=1327 audit(1719903377.157:657): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.163000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.172625 kernel: audit: type=1400 audit(1719903377.163:658): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.163000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eae0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.176536 kernel: audit: type=1300 audit(1719903377.163:658): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eae0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.163000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.181766 kernel: audit: type=1327 audit(1719903377.163:658): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.182004 kernel: audit: type=1400 audit(1719903377.164:659): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.164000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.164000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eb00 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.185268 kernel: audit: type=1300 audit(1719903377.164:659): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e9eb00 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.164000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.188429 kernel: audit: type=1327 audit(1719903377.164:659): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.165000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.165000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00323cd20 a2=fc6 a3=0 items=0 ppid=2649 
pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:56:17.191645 kernel: audit: type=1400 audit(1719903377.165:660): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:56:17.165000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:56:17.932101 containerd[1789]: time="2024-07-02T06:56:17.932016765Z" level=info msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:17.988 [WARNING][5337] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c97b0cef-d13a-4897-9382-2bce2f41c748", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5", Pod:"coredns-7db6d8ff4d-cbbzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19b5a0fda32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:17.989 [INFO][5337] k8s.go 608: Cleaning up netns ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:17.989 [INFO][5337] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" iface="eth0" netns="" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:17.989 [INFO][5337] k8s.go 615: Releasing IP address(es) ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:17.989 [INFO][5337] utils.go 188: Calico CNI releasing IP address ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.026 [INFO][5345] ipam_plugin.go 411: Releasing address using handleID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.026 [INFO][5345] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.027 [INFO][5345] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.048 [WARNING][5345] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.048 [INFO][5345] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.050 [INFO][5345] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.053663 containerd[1789]: 2024-07-02 06:56:18.051 [INFO][5337] k8s.go 621: Teardown processing complete. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.054638 containerd[1789]: time="2024-07-02T06:56:18.054571434Z" level=info msg="TearDown network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" successfully" Jul 2 06:56:18.054743 containerd[1789]: time="2024-07-02T06:56:18.054636023Z" level=info msg="StopPodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" returns successfully" Jul 2 06:56:18.055281 containerd[1789]: time="2024-07-02T06:56:18.055249652Z" level=info msg="RemovePodSandbox for \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" Jul 2 06:56:18.055380 containerd[1789]: time="2024-07-02T06:56:18.055293386Z" level=info msg="Forcibly stopping sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\"" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.122 [WARNING][5365] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c97b0cef-d13a-4897-9382-2bce2f41c748", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"ec62f56ad177b8e009d072a259faa2615ec1dd11ebecaf0207e8f322c68eb5f5", Pod:"coredns-7db6d8ff4d-cbbzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19b5a0fda32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.122 [INFO][5365] k8s.go 608: Cleaning up netns ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.122 [INFO][5365] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" iface="eth0" netns="" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.123 [INFO][5365] k8s.go 615: Releasing IP address(es) ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.123 [INFO][5365] utils.go 188: Calico CNI releasing IP address ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.173 [INFO][5371] ipam_plugin.go 411: Releasing address using handleID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.173 [INFO][5371] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.173 [INFO][5371] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.185 [WARNING][5371] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.185 [INFO][5371] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" HandleID="k8s-pod-network.2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--cbbzt-eth0" Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.188 [INFO][5371] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.191836 containerd[1789]: 2024-07-02 06:56:18.189 [INFO][5365] k8s.go 621: Teardown processing complete. ContainerID="2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff" Jul 2 06:56:18.194001 containerd[1789]: time="2024-07-02T06:56:18.193124316Z" level=info msg="TearDown network for sandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" successfully" Jul 2 06:56:18.225425 containerd[1789]: time="2024-07-02T06:56:18.225370859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:56:18.230787 containerd[1789]: time="2024-07-02T06:56:18.230730346Z" level=info msg="RemovePodSandbox \"2fd50e983d77eb8d8a6b171a420ccf6044f7c618b5d1302731b0cf57b6977eff\" returns successfully" Jul 2 06:56:18.231472 containerd[1789]: time="2024-07-02T06:56:18.231440111Z" level=info msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" Jul 2 06:56:18.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.4:22-139.178.89.65:38210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:18.356645 systemd[1]: Started sshd@9-172.31.18.4:22-139.178.89.65:38210.service - OpenSSH per-connection server daemon (139.178.89.65:38210). Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.274 [WARNING][5390] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27063218-f415-4854-a94c-adda458ba699", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d", Pod:"coredns-7db6d8ff4d-ndnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e5b627e20b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.275 [INFO][5390] k8s.go 608: Cleaning up netns ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.275 [INFO][5390] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" iface="eth0" netns="" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.275 [INFO][5390] k8s.go 615: Releasing IP address(es) ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.275 [INFO][5390] utils.go 188: Calico CNI releasing IP address ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.303 [INFO][5396] ipam_plugin.go 411: Releasing address using handleID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.303 [INFO][5396] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.303 [INFO][5396] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.322 [WARNING][5396] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.322 [INFO][5396] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.329 [INFO][5396] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.365759 containerd[1789]: 2024-07-02 06:56:18.360 [INFO][5390] k8s.go 621: Teardown processing complete. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.366916 containerd[1789]: time="2024-07-02T06:56:18.366871659Z" level=info msg="TearDown network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" successfully" Jul 2 06:56:18.367093 containerd[1789]: time="2024-07-02T06:56:18.367065890Z" level=info msg="StopPodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" returns successfully" Jul 2 06:56:18.369411 containerd[1789]: time="2024-07-02T06:56:18.368157644Z" level=info msg="RemovePodSandbox for \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" Jul 2 06:56:18.369767 containerd[1789]: time="2024-07-02T06:56:18.369700703Z" level=info msg="Forcibly stopping sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\"" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.450 [WARNING][5418] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27063218-f415-4854-a94c-adda458ba699", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"2eb359fa23f76ea7e885723c7d16c3eaa890a1b1df1f8a39e5ffc96d829dc21d", Pod:"coredns-7db6d8ff4d-ndnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e5b627e20b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.451 [INFO][5418] k8s.go 608: Cleaning up netns ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.451 [INFO][5418] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" iface="eth0" netns="" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.451 [INFO][5418] k8s.go 615: Releasing IP address(es) ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.451 [INFO][5418] utils.go 188: Calico CNI releasing IP address ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.490 [INFO][5425] ipam_plugin.go 411: Releasing address using handleID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.490 [INFO][5425] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.490 [INFO][5425] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.500 [WARNING][5425] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.501 [INFO][5425] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" HandleID="k8s-pod-network.3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Workload="ip--172--31--18--4-k8s-coredns--7db6d8ff4d--ndnxh-eth0" Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.503 [INFO][5425] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.509114 containerd[1789]: 2024-07-02 06:56:18.505 [INFO][5418] k8s.go 621: Teardown processing complete. ContainerID="3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c" Jul 2 06:56:18.510271 containerd[1789]: time="2024-07-02T06:56:18.510180722Z" level=info msg="TearDown network for sandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" successfully" Jul 2 06:56:18.553934 containerd[1789]: time="2024-07-02T06:56:18.553804952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:56:18.554135 containerd[1789]: time="2024-07-02T06:56:18.553978367Z" level=info msg="RemovePodSandbox \"3b56223a6301d34d2f252b3e4747efa3a8f3f3b145f556f9b010c6808ebb1a1c\" returns successfully" Jul 2 06:56:18.554780 containerd[1789]: time="2024-07-02T06:56:18.554678891Z" level=info msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" Jul 2 06:56:18.573000 audit[5403]: USER_ACCT pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:18.575504 sshd[5403]: Accepted publickey for core from 139.178.89.65 port 38210 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:18.575000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:18.575000 audit[5403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe883fdc20 a2=3 a3=7f9d3a886480 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:18.575000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:18.590253 systemd-logind[1779]: New session 10 of user core. Jul 2 06:56:18.578858 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:18.595017 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 06:56:18.616000 audit[5403]: USER_START pid=5403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:18.620000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.649 [WARNING][5443] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0", GenerateName:"calico-kube-controllers-66d8cc657c-", Namespace:"calico-system", SelfLink:"", UID:"192f159c-ecbb-42bd-9e06-890e0a3f42d5", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d8cc657c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e", Pod:"calico-kube-controllers-66d8cc657c-jrltf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3504a55f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.649 [INFO][5443] k8s.go 608: Cleaning up netns ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.649 [INFO][5443] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" iface="eth0" netns="" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.649 [INFO][5443] k8s.go 615: Releasing IP address(es) ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.649 [INFO][5443] utils.go 188: Calico CNI releasing IP address ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.746 [INFO][5453] ipam_plugin.go 411: Releasing address using handleID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.746 [INFO][5453] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.746 [INFO][5453] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.777 [WARNING][5453] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.777 [INFO][5453] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.780 [INFO][5453] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.784573 containerd[1789]: 2024-07-02 06:56:18.782 [INFO][5443] k8s.go 621: Teardown processing complete. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.785338 containerd[1789]: time="2024-07-02T06:56:18.784573547Z" level=info msg="TearDown network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" successfully" Jul 2 06:56:18.785338 containerd[1789]: time="2024-07-02T06:56:18.784620764Z" level=info msg="StopPodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" returns successfully" Jul 2 06:56:18.787194 containerd[1789]: time="2024-07-02T06:56:18.786253952Z" level=info msg="RemovePodSandbox for \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" Jul 2 06:56:18.787194 containerd[1789]: time="2024-07-02T06:56:18.786303997Z" level=info msg="Forcibly stopping sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\"" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.852 [WARNING][5481] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0", GenerateName:"calico-kube-controllers-66d8cc657c-", Namespace:"calico-system", SelfLink:"", UID:"192f159c-ecbb-42bd-9e06-890e0a3f42d5", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66d8cc657c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"1b962ff01bac51ba22a99ec787d0f95a324e5321c41cbab1811fd9bca20b8c7e", Pod:"calico-kube-controllers-66d8cc657c-jrltf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3504a55f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.853 [INFO][5481] k8s.go 608: Cleaning up netns ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.853 [INFO][5481] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" iface="eth0" netns="" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.853 [INFO][5481] k8s.go 615: Releasing IP address(es) ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.853 [INFO][5481] utils.go 188: Calico CNI releasing IP address ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.899 [INFO][5490] ipam_plugin.go 411: Releasing address using handleID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.899 [INFO][5490] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.899 [INFO][5490] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.907 [WARNING][5490] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.907 [INFO][5490] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" HandleID="k8s-pod-network.bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Workload="ip--172--31--18--4-k8s-calico--kube--controllers--66d8cc657c--jrltf-eth0" Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.910 [INFO][5490] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:18.914433 containerd[1789]: 2024-07-02 06:56:18.912 [INFO][5481] k8s.go 621: Teardown processing complete. ContainerID="bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078" Jul 2 06:56:18.914433 containerd[1789]: time="2024-07-02T06:56:18.914178801Z" level=info msg="TearDown network for sandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" successfully" Jul 2 06:56:18.919222 containerd[1789]: time="2024-07-02T06:56:18.919180235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:56:18.919436 containerd[1789]: time="2024-07-02T06:56:18.919409520Z" level=info msg="RemovePodSandbox \"bc695bdf15ae33435b057aa3a8961332dacabaad5ccb56222b7e6e6f591c5078\" returns successfully" Jul 2 06:56:18.920073 containerd[1789]: time="2024-07-02T06:56:18.920046667Z" level=info msg="StopPodSandbox for \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\"" Jul 2 06:56:18.927239 containerd[1789]: time="2024-07-02T06:56:18.920238308Z" level=info msg="TearDown network for sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" successfully" Jul 2 06:56:18.927410 containerd[1789]: time="2024-07-02T06:56:18.927391196Z" level=info msg="StopPodSandbox for \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" returns successfully" Jul 2 06:56:18.928021 containerd[1789]: time="2024-07-02T06:56:18.927991529Z" level=info msg="RemovePodSandbox for \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\"" Jul 2 06:56:18.928234 containerd[1789]: time="2024-07-02T06:56:18.928159612Z" level=info msg="Forcibly stopping sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\"" Jul 2 06:56:18.928404 containerd[1789]: time="2024-07-02T06:56:18.928382319Z" level=info msg="TearDown network for sandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" successfully" Jul 2 06:56:18.935685 containerd[1789]: time="2024-07-02T06:56:18.935344326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 06:56:18.936932 containerd[1789]: time="2024-07-02T06:56:18.936898035Z" level=info msg="RemovePodSandbox \"44486b2a81df68d8df5827a10adb98361e2ff09ad6f4378d025ee7d28a7febc0\" returns successfully" Jul 2 06:56:18.937616 containerd[1789]: time="2024-07-02T06:56:18.937589976Z" level=info msg="StopPodSandbox for \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\"" Jul 2 06:56:18.944644 containerd[1789]: time="2024-07-02T06:56:18.944550568Z" level=info msg="TearDown network for sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" successfully" Jul 2 06:56:18.944828 containerd[1789]: time="2024-07-02T06:56:18.944798405Z" level=info msg="StopPodSandbox for \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" returns successfully" Jul 2 06:56:18.947358 containerd[1789]: time="2024-07-02T06:56:18.947308054Z" level=info msg="RemovePodSandbox for \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\"" Jul 2 06:56:18.947544 containerd[1789]: time="2024-07-02T06:56:18.947369716Z" level=info msg="Forcibly stopping sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\"" Jul 2 06:56:18.948410 containerd[1789]: time="2024-07-02T06:56:18.948369713Z" level=info msg="TearDown network for sandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" successfully" Jul 2 06:56:18.961422 containerd[1789]: time="2024-07-02T06:56:18.961370465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:56:18.961805 containerd[1789]: time="2024-07-02T06:56:18.961779173Z" level=info msg="RemovePodSandbox \"55f3e1d4a463d67ed486d52b86b240d3820d5663a9e433e4f0127ce76f1590f1\" returns successfully" Jul 2 06:56:18.962333 containerd[1789]: time="2024-07-02T06:56:18.962305566Z" level=info msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.035 [WARNING][5510] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163", Pod:"csi-node-driver-j88n9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid65d9c2bb33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.035 [INFO][5510] k8s.go 608: Cleaning up netns ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.036 [INFO][5510] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" iface="eth0" netns="" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.036 [INFO][5510] k8s.go 615: Releasing IP address(es) ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.036 [INFO][5510] utils.go 188: Calico CNI releasing IP address ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.081 [INFO][5516] ipam_plugin.go 411: Releasing address using handleID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.081 [INFO][5516] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.082 [INFO][5516] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.094 [WARNING][5516] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.094 [INFO][5516] ipam_plugin.go 439: Releasing address using workloadID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.095 [INFO][5516] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:19.099887 containerd[1789]: 2024-07-02 06:56:19.097 [INFO][5510] k8s.go 621: Teardown processing complete. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.100751 containerd[1789]: time="2024-07-02T06:56:19.100711416Z" level=info msg="TearDown network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" successfully" Jul 2 06:56:19.100880 containerd[1789]: time="2024-07-02T06:56:19.100857868Z" level=info msg="StopPodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" returns successfully" Jul 2 06:56:19.101420 containerd[1789]: time="2024-07-02T06:56:19.101395986Z" level=info msg="RemovePodSandbox for \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" Jul 2 06:56:19.101633 containerd[1789]: time="2024-07-02T06:56:19.101580808Z" level=info msg="Forcibly stopping sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\"" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.152 [WARNING][5535] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd0fcf2c-1a69-4f59-9dc5-d51372ca28c8", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"39358adec86259537af495971e2255e7b7bf7dccc42770e9338ac212522b8163", Pod:"csi-node-driver-j88n9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid65d9c2bb33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.153 [INFO][5535] k8s.go 608: Cleaning up netns ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.153 [INFO][5535] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" iface="eth0" netns="" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.153 [INFO][5535] k8s.go 615: Releasing IP address(es) ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.153 [INFO][5535] utils.go 188: Calico CNI releasing IP address ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.191 [INFO][5542] ipam_plugin.go 411: Releasing address using handleID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.192 [INFO][5542] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.192 [INFO][5542] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.201 [WARNING][5542] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.201 [INFO][5542] ipam_plugin.go 439: Releasing address using workloadID ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" HandleID="k8s-pod-network.187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Workload="ip--172--31--18--4-k8s-csi--node--driver--j88n9-eth0" Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.203 [INFO][5542] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:56:19.207348 containerd[1789]: 2024-07-02 06:56:19.204 [INFO][5535] k8s.go 621: Teardown processing complete. ContainerID="187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493" Jul 2 06:56:19.208093 containerd[1789]: time="2024-07-02T06:56:19.208054729Z" level=info msg="TearDown network for sandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" successfully" Jul 2 06:56:19.231019 containerd[1789]: time="2024-07-02T06:56:19.230951670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:56:19.234518 containerd[1789]: time="2024-07-02T06:56:19.234448062Z" level=info msg="RemovePodSandbox \"187cfcfa4f76fb410bef0cf491e96330f53d5b2791ae375b42683d900719b493\" returns successfully" Jul 2 06:56:19.359689 sshd[5403]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:19.366000 audit[5403]: USER_END pid=5403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.366000 audit[5403]: CRED_DISP pid=5403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.4:22-139.178.89.65:38210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:19.369844 systemd[1]: sshd@9-172.31.18.4:22-139.178.89.65:38210.service: Deactivated successfully. Jul 2 06:56:19.371079 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 06:56:19.372761 systemd-logind[1779]: Session 10 logged out. Waiting for processes to exit. Jul 2 06:56:19.374295 systemd-logind[1779]: Removed session 10. Jul 2 06:56:19.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.4:22-139.178.89.65:38224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:19.394058 systemd[1]: Started sshd@10-172.31.18.4:22-139.178.89.65:38224.service - OpenSSH per-connection server daemon (139.178.89.65:38224). 
Jul 2 06:56:19.546000 audit[5552]: USER_ACCT pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.547875 sshd[5552]: Accepted publickey for core from 139.178.89.65 port 38224 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:19.547000 audit[5552]: CRED_ACQ pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.547000 audit[5552]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd760f7600 a2=3 a3=7f0ed894f480 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:19.547000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:19.549575 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:19.555581 systemd-logind[1779]: New session 11 of user core. Jul 2 06:56:19.564855 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 06:56:19.569000 audit[5552]: USER_START pid=5552 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.571000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.814876 sshd[5552]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:19.816000 audit[5552]: USER_END pid=5552 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.816000 audit[5552]: CRED_DISP pid=5552 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:19.820568 systemd-logind[1779]: Session 11 logged out. Waiting for processes to exit. Jul 2 06:56:19.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.4:22-139.178.89.65:38224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:19.821666 systemd[1]: sshd@10-172.31.18.4:22-139.178.89.65:38224.service: Deactivated successfully. Jul 2 06:56:19.822689 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 06:56:19.824442 systemd-logind[1779]: Removed session 11. Jul 2 06:56:19.850561 systemd[1]: Started sshd@11-172.31.18.4:22-139.178.89.65:38236.service - OpenSSH per-connection server daemon (139.178.89.65:38236). 
Jul 2 06:56:19.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.4:22-139.178.89.65:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:20.004000 audit[5562]: USER_ACCT pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.006478 sshd[5562]: Accepted publickey for core from 139.178.89.65 port 38236 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:20.006000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.006000 audit[5562]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdf504990 a2=3 a3=7f147a839480 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:20.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:20.009010 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:20.014319 systemd-logind[1779]: New session 12 of user core. Jul 2 06:56:20.017739 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 06:56:20.022000 audit[5562]: USER_START pid=5562 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.024000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.271595 sshd[5562]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:20.271000 audit[5562]: USER_END pid=5562 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.271000 audit[5562]: CRED_DISP pid=5562 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:20.275087 systemd[1]: sshd@11-172.31.18.4:22-139.178.89.65:38236.service: Deactivated successfully. Jul 2 06:56:20.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.4:22-139.178.89.65:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:20.276171 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 06:56:20.277120 systemd-logind[1779]: Session 12 logged out. Waiting for processes to exit. 
Jul 2 06:56:20.278562 systemd-logind[1779]: Removed session 12. Jul 2 06:56:21.474074 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.jI9CAK.mount: Deactivated successfully. Jul 2 06:56:25.309297 systemd[1]: Started sshd@12-172.31.18.4:22-139.178.89.65:38248.service - OpenSSH per-connection server daemon (139.178.89.65:38248). Jul 2 06:56:25.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.4:22-139.178.89.65:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:25.314097 kernel: kauditd_printk_skb: 35 callbacks suppressed Jul 2 06:56:25.314235 kernel: audit: type=1130 audit(1719903385.310:688): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.4:22-139.178.89.65:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:25.472000 audit[5603]: USER_ACCT pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.473792 sshd[5603]: Accepted publickey for core from 139.178.89.65 port 38248 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:25.476609 kernel: audit: type=1101 audit(1719903385.472:689): pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.487540 kernel: audit: type=1103 audit(1719903385.477:690): pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.487743 kernel: audit: type=1006 audit(1719903385.477:691): pid=5603 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 2 06:56:25.487809 kernel: audit: type=1300 audit(1719903385.477:691): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed7badf90 a2=3 a3=7eff73076480 items=0 ppid=1 pid=5603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:25.487952 kernel: audit: type=1327 audit(1719903385.477:691): proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:25.477000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.477000 audit[5603]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed7badf90 a2=3 a3=7eff73076480 items=0 ppid=1 pid=5603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:25.477000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:25.478305 sshd[5603]: pam_unix(sshd:session): session opened for user 
core(uid=500) by (uid=0) Jul 2 06:56:25.493047 systemd-logind[1779]: New session 13 of user core. Jul 2 06:56:25.497825 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 06:56:25.505000 audit[5603]: USER_START pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.509525 kernel: audit: type=1105 audit(1719903385.505:692): pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.509000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.513603 kernel: audit: type=1103 audit(1719903385.509:693): pid=5605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.752831 sshd[5603]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:25.753000 audit[5603]: USER_END pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.756000 audit[5603]: CRED_DISP pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.760521 kernel: audit: type=1106 audit(1719903385.753:694): pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.760633 kernel: audit: type=1104 audit(1719903385.756:695): pid=5603 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:25.768925 systemd[1]: sshd@12-172.31.18.4:22-139.178.89.65:38248.service: Deactivated successfully. Jul 2 06:56:25.770536 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 06:56:25.770656 systemd-logind[1779]: Session 13 logged out. Waiting for processes to exit. Jul 2 06:56:25.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.4:22-139.178.89.65:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:25.772273 systemd-logind[1779]: Removed session 13. 
Jul 2 06:56:28.767702 update_engine[1780]: I0702 06:56:28.767578 1780 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 06:56:28.767702 update_engine[1780]: I0702 06:56:28.767643 1780 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 06:56:28.771098 update_engine[1780]: I0702 06:56:28.770962 1780 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 06:56:28.772188 update_engine[1780]: I0702 06:56:28.772161 1780 omaha_request_params.cc:62] Current group set to stable Jul 2 06:56:28.772677 update_engine[1780]: I0702 06:56:28.772497 1780 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 06:56:28.772782 update_engine[1780]: I0702 06:56:28.772769 1780 update_attempter.cc:643] Scheduling an action processor start. Jul 2 06:56:28.773021 update_engine[1780]: I0702 06:56:28.773001 1780 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 06:56:28.773221 update_engine[1780]: I0702 06:56:28.773142 1780 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 06:56:28.773379 update_engine[1780]: I0702 06:56:28.773363 1780 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 06:56:28.773451 update_engine[1780]: I0702 06:56:28.773439 1780 omaha_request_action.cc:272] Request: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.773451 update_engine[1780]: Jul 2 06:56:28.776634 update_engine[1780]: I0702 06:56:28.776616 1780 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 06:56:28.786119 update_engine[1780]: I0702 06:56:28.786080 1780 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 06:56:28.786772 update_engine[1780]: I0702 06:56:28.786666 1780 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 06:56:28.798876 update_engine[1780]: E0702 06:56:28.798836 1780 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 06:56:28.799217 update_engine[1780]: I0702 06:56:28.799197 1780 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 06:56:28.812813 locksmithd[1804]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 06:56:30.800327 systemd[1]: Started sshd@13-172.31.18.4:22-139.178.89.65:58394.service - OpenSSH per-connection server daemon (139.178.89.65:58394). Jul 2 06:56:30.804813 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:56:30.804947 kernel: audit: type=1130 audit(1719903390.800:697): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.4:22-139.178.89.65:58394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:30.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.4:22-139.178.89.65:58394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:30.975042 kernel: audit: type=1101 audit(1719903390.966:698): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:30.975156 kernel: audit: type=1103 audit(1719903390.968:699): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:30.975281 kernel: audit: type=1006 audit(1719903390.968:700): pid=5621 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 2 06:56:30.966000 audit[5621]: USER_ACCT pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:30.968000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:30.969223 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:30.975915 sshd[5621]: Accepted publickey for core from 139.178.89.65 port 58394 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:30.968000 audit[5621]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03457e20 a2=3 a3=7fc956f52480 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:30.968000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:30.980118 kernel: audit: type=1300 audit(1719903390.968:700): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03457e20 a2=3 a3=7fc956f52480 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:30.980344 kernel: audit: type=1327 audit(1719903390.968:700): proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:30.986598 systemd-logind[1779]: New session 14 of user core. Jul 2 06:56:30.990802 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 2 06:56:31.001000 audit[5621]: USER_START pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.006530 kernel: audit: type=1105 audit(1719903391.001:701): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.005000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.009627 kernel: audit: type=1103 audit(1719903391.005:702): pid=5623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.354548 sshd[5621]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:31.358000 audit[5621]: USER_END pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.363588 kernel: audit: type=1106 audit(1719903391.358:703): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.363000 audit[5621]: CRED_DISP pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.370544 kernel: audit: type=1104 audit(1719903391.363:704): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:31.378019 systemd[1]: sshd@13-172.31.18.4:22-139.178.89.65:58394.service: Deactivated successfully. Jul 2 06:56:31.379294 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 06:56:31.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.4:22-139.178.89.65:58394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:31.381360 systemd-logind[1779]: Session 14 logged out. Waiting for processes to exit. Jul 2 06:56:31.383008 systemd-logind[1779]: Removed session 14. Jul 2 06:56:32.568967 systemd[1]: run-containerd-runc-k8s.io-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31-runc.Zkpujf.mount: Deactivated successfully. 
Jul 2 06:56:36.398384 systemd[1]: Started sshd@14-172.31.18.4:22-139.178.89.65:58406.service - OpenSSH per-connection server daemon (139.178.89.65:58406). Jul 2 06:56:36.407570 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:56:36.407688 kernel: audit: type=1130 audit(1719903396.397:706): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.4:22-139.178.89.65:58406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:36.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.4:22-139.178.89.65:58406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:36.601204 kernel: audit: type=1101 audit(1719903396.592:707): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.601304 kernel: audit: type=1103 audit(1719903396.592:708): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.601330 kernel: audit: type=1006 audit(1719903396.592:709): pid=5662 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 2 06:56:36.592000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.604396 kernel: audit: type=1300 audit(1719903396.592:709): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4599cdb0 a2=3 a3=7fc1db174480 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:36.592000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.592000 audit[5662]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4599cdb0 a2=3 a3=7fc1db174480 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:36.600029 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:36.604890 sshd[5662]: Accepted publickey for core from 139.178.89.65 port 58406 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:36.592000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:36.607599 kernel: audit: type=1327 audit(1719903396.592:709): proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:36.608997 systemd-logind[1779]: New session 15 of user core. Jul 2 06:56:36.613734 systemd[1]: Started session-15.scope - Session 15 of User core. 
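The "Accepted publickey ... RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY" lines carry OpenSSH's SHA-256 key fingerprint: base64 (padding stripped) of the SHA-256 digest of the raw public key blob. A sketch of how such a fingerprint is derived from an authorized_keys-style line; the key in the usage comment is a placeholder, since the actual public key never appears in this log:

    import base64, hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        # the second whitespace-separated field is the base64-encoded key blob
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # e.g. ssh_sha256_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@host")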
Jul 2 06:56:36.620000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.626937 kernel: audit: type=1105 audit(1719903396.620:710): pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.627054 kernel: audit: type=1103 audit(1719903396.624:711): pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.624000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.929944 sshd[5662]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:36.936000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.944655 kernel: audit: type=1106 audit(1719903396.936:712): pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.944851 kernel: audit: type=1104 audit(1719903396.937:713): pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.937000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:36.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.4:22-139.178.89.65:58406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:36.946872 systemd[1]: sshd@14-172.31.18.4:22-139.178.89.65:58406.service: Deactivated successfully. Jul 2 06:56:36.947869 systemd-logind[1779]: Session 15 logged out. Waiting for processes to exit. Jul 2 06:56:36.948379 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 06:56:36.951248 systemd-logind[1779]: Removed session 15. 
Jul 2 06:56:37.876052 kubelet[3113]: I0702 06:56:37.875998 3113 topology_manager.go:215] "Topology Admit Handler" podUID="561c7f23-bf41-4513-9136-dbe2410d719d" podNamespace="calico-apiserver" podName="calico-apiserver-7c86f666f8-rj9gp" Jul 2 06:56:37.890872 systemd[1]: Created slice kubepods-besteffort-pod561c7f23_bf41_4513_9136_dbe2410d719d.slice - libcontainer container kubepods-besteffort-pod561c7f23_bf41_4513_9136_dbe2410d719d.slice. Jul 2 06:56:37.947000 audit[5674]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=5674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:37.947000 audit[5674]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc058282e0 a2=0 a3=7ffc058282cc items=0 ppid=3284 pid=5674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:37.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:37.949000 audit[5674]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=5674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:37.949000 audit[5674]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc058282e0 a2=0 a3=7ffc058282cc items=0 ppid=3284 pid=5674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:37.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:37.968000 audit[5676]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=5676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:37.968000 audit[5676]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdac74be90 a2=0 a3=7ffdac74be7c items=0 ppid=3284 pid=5676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:37.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:37.971000 audit[5676]: NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=5676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:37.971000 audit[5676]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdac74be90 a2=0 a3=7ffdac74be7c items=0 ppid=3284 pid=5676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:37.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:38.009537 kubelet[3113]: I0702 06:56:38.009477 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/561c7f23-bf41-4513-9136-dbe2410d719d-calico-apiserver-certs\") pod \"calico-apiserver-7c86f666f8-rj9gp\" (UID: 
\"561c7f23-bf41-4513-9136-dbe2410d719d\") " pod="calico-apiserver/calico-apiserver-7c86f666f8-rj9gp" Jul 2 06:56:38.009792 kubelet[3113]: I0702 06:56:38.009586 3113 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps72\" (UniqueName: \"kubernetes.io/projected/561c7f23-bf41-4513-9136-dbe2410d719d-kube-api-access-bps72\") pod \"calico-apiserver-7c86f666f8-rj9gp\" (UID: \"561c7f23-bf41-4513-9136-dbe2410d719d\") " pod="calico-apiserver/calico-apiserver-7c86f666f8-rj9gp" Jul 2 06:56:38.117758 kubelet[3113]: E0702 06:56:38.110669 3113 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 06:56:38.139041 kubelet[3113]: E0702 06:56:38.138917 3113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/561c7f23-bf41-4513-9136-dbe2410d719d-calico-apiserver-certs podName:561c7f23-bf41-4513-9136-dbe2410d719d nodeName:}" failed. No retries permitted until 2024-07-02 06:56:38.627609118 +0000 UTC m=+80.918480959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/561c7f23-bf41-4513-9136-dbe2410d719d-calico-apiserver-certs") pod "calico-apiserver-7c86f666f8-rj9gp" (UID: "561c7f23-bf41-4513-9136-dbe2410d719d") : secret "calico-apiserver-certs" not found Jul 2 06:56:38.687867 update_engine[1780]: I0702 06:56:38.687811 1780 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 06:56:38.688341 update_engine[1780]: I0702 06:56:38.688122 1780 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 06:56:38.688341 update_engine[1780]: I0702 06:56:38.688336 1780 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 06:56:38.689230 update_engine[1780]: E0702 06:56:38.689201 1780 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 06:56:38.689343 update_engine[1780]: I0702 06:56:38.689321 1780 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 06:56:38.803298 containerd[1789]: time="2024-07-02T06:56:38.803228686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c86f666f8-rj9gp,Uid:561c7f23-bf41-4513-9136-dbe2410d719d,Namespace:calico-apiserver,Attempt:0,}" Jul 2 06:56:39.154531 systemd-networkd[1514]: calid8cc2f01ace: Link UP Jul 2 06:56:39.157299 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:56:39.157411 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid8cc2f01ace: link becomes ready Jul 2 06:56:39.157710 systemd-networkd[1514]: calid8cc2f01ace: Gained carrier Jul 2 06:56:39.158674 (udev-worker)[5698]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:38.968 [INFO][5679] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0 calico-apiserver-7c86f666f8- calico-apiserver 561c7f23-bf41-4513-9136-dbe2410d719d 1079 0 2024-07-02 06:56:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c86f666f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-4 calico-apiserver-7c86f666f8-rj9gp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8cc2f01ace [] []}} ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:38.972 [INFO][5679] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.076 [INFO][5691] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" HandleID="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Workload="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.092 [INFO][5691] ipam_plugin.go 264: Auto assigning IP ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" HandleID="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Workload="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003180d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-4", "pod":"calico-apiserver-7c86f666f8-rj9gp", "timestamp":"2024-07-02 06:56:39.076146993 +0000 UTC"}, Hostname:"ip-172-31-18-4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.092 [INFO][5691] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.092 [INFO][5691] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.092 [INFO][5691] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-4' Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.095 [INFO][5691] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.110 [INFO][5691] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.119 [INFO][5691] ipam.go 489: Trying affinity for 192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.126 [INFO][5691] ipam.go 155: Attempting to load block cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.130 [INFO][5691] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.130 [INFO][5691] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.132 [INFO][5691] ipam.go 1685: Creating new handle: k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.137 [INFO][5691] ipam.go 1203: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.146 [INFO][5691] ipam.go 1216: Successfully claimed IPs: [192.168.13.133/26] block=192.168.13.128/26 handle="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.146 [INFO][5691] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.133/26] handle="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" host="ip-172-31-18-4" Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.147 [INFO][5691] ipam_plugin.go 373: Released host-wide IPAM lock. 
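Calico's IPAM walk above confirms this node's affinity for the block 192.168.13.128/26 and then claims 192.168.13.133 out of it for the new pod. A quick sanity check that the claimed address really sits inside the affine /26 (which holds 64 addresses):

    import ipaddress

    block = ipaddress.ip_network("192.168.13.128/26")   # block the node has an affinity for
    claimed = ipaddress.ip_address("192.168.13.133")    # address claimed for the pod

    print(claimed in block)      # True
    print(block.num_addresses)   # 64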
Jul 2 06:56:39.192201 containerd[1789]: 2024-07-02 06:56:39.147 [INFO][5691] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.13.133/26] IPv6=[] ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" HandleID="k8s-pod-network.49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Workload="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.150 [INFO][5679] k8s.go 386: Populated endpoint ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0", GenerateName:"calico-apiserver-7c86f666f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"561c7f23-bf41-4513-9136-dbe2410d719d", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c86f666f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"", Pod:"calico-apiserver-7c86f666f8-rj9gp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8cc2f01ace", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.151 [INFO][5679] k8s.go 387: Calico CNI using IPs: [192.168.13.133/32] ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.151 [INFO][5679] dataplane_linux.go 68: Setting the host side veth name to calid8cc2f01ace ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.162 [INFO][5679] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.163 [INFO][5679] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0", GenerateName:"calico-apiserver-7c86f666f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"561c7f23-bf41-4513-9136-dbe2410d719d", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c86f666f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-4", ContainerID:"49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e", Pod:"calico-apiserver-7c86f666f8-rj9gp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8cc2f01ace", MAC:"ba:68:dc:cf:62:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:56:39.193862 containerd[1789]: 2024-07-02 06:56:39.185 [INFO][5679] k8s.go 500: Wrote updated endpoint to datastore ContainerID="49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e" Namespace="calico-apiserver" Pod="calico-apiserver-7c86f666f8-rj9gp" WorkloadEndpoint="ip--172--31--18--4-k8s-calico--apiserver--7c86f666f8--rj9gp-eth0" Jul 2 06:56:39.303338 containerd[1789]: time="2024-07-02T06:56:39.290479195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:39.303547 containerd[1789]: time="2024-07-02T06:56:39.303393897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:39.303547 containerd[1789]: time="2024-07-02T06:56:39.303523312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:39.303671 containerd[1789]: time="2024-07-02T06:56:39.303594521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:39.374000 audit[5734]: NETFILTER_CFG table=filter:125 family=2 entries=61 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:56:39.374000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=30316 a0=3 a1=7ffd9ffd28a0 a2=0 a3=7ffd9ffd288c items=0 ppid=4412 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:39.374000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:56:39.384992 systemd[1]: run-containerd-runc-k8s.io-49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e-runc.34kZIB.mount: Deactivated successfully. Jul 2 06:56:39.396727 systemd[1]: Started cri-containerd-49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e.scope - libcontainer container 49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e. Jul 2 06:56:39.431000 audit: BPF prog-id=188 op=LOAD Jul 2 06:56:39.432000 audit: BPF prog-id=189 op=LOAD Jul 2 06:56:39.432000 audit[5729]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5718 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:39.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383136616464653035356166393333343966663561336633306131 Jul 2 06:56:39.432000 audit: BPF prog-id=190 op=LOAD Jul 2 06:56:39.432000 audit[5729]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5718 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:39.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383136616464653035356166393333343966663561336633306131 Jul 2 06:56:39.432000 audit: BPF prog-id=190 op=UNLOAD Jul 2 06:56:39.432000 audit: BPF prog-id=189 op=UNLOAD Jul 2 06:56:39.432000 audit: BPF prog-id=191 op=LOAD Jul 2 06:56:39.432000 audit[5729]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5718 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:39.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383136616464653035356166393333343966663561336633306131 Jul 2 06:56:39.500091 containerd[1789]: time="2024-07-02T06:56:39.498248300Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c86f666f8-rj9gp,Uid:561c7f23-bf41-4513-9136-dbe2410d719d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e\"" Jul 2 06:56:39.502964 containerd[1789]: time="2024-07-02T06:56:39.502807537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 06:56:41.112621 systemd-networkd[1514]: calid8cc2f01ace: Gained IPv6LL Jul 2 06:56:41.965278 kernel: kauditd_printk_skb: 28 callbacks suppressed Jul 2 06:56:41.965436 kernel: audit: type=1130 audit(1719903401.960:726): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.4:22-139.178.89.65:45838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:41.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.4:22-139.178.89.65:45838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:41.961746 systemd[1]: Started sshd@15-172.31.18.4:22-139.178.89.65:45838.service - OpenSSH per-connection server daemon (139.178.89.65:45838). Jul 2 06:56:42.173000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.178706 kernel: audit: type=1101 audit(1719903402.173:727): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.178855 sshd[5759]: Accepted publickey for core from 139.178.89.65 port 45838 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:42.173000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.179513 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:42.182521 kernel: audit: type=1103 audit(1719903402.173:728): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.187769 kernel: audit: type=1006 audit(1719903402.173:729): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 06:56:42.187884 kernel: audit: type=1300 audit(1719903402.173:729): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea719d9d0 a2=3 a3=7f68f1d5f480 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:42.173000 audit[5759]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea719d9d0 a2=3 a3=7f68f1d5f480 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:42.173000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:42.189594 kernel: audit: type=1327 audit(1719903402.173:729): proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:42.193454 systemd-logind[1779]: New session 16 of user core. Jul 2 06:56:42.196744 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 06:56:42.205000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.216048 kernel: audit: type=1105 audit(1719903402.205:730): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.219944 kernel: audit: type=1103 audit(1719903402.215:731): pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.215000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:42.415122 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.Is8SuY.mount: Deactivated successfully. Jul 2 06:56:43.273223 sshd[5759]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:43.274000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.274000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.279950 systemd[1]: sshd@15-172.31.18.4:22-139.178.89.65:45838.service: Deactivated successfully. Jul 2 06:56:43.283031 kernel: audit: type=1106 audit(1719903403.274:732): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.283088 kernel: audit: type=1104 audit(1719903403.274:733): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.281032 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 2 06:56:43.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.4:22-139.178.89.65:45838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.286359 systemd-logind[1779]: Session 16 logged out. Waiting for processes to exit. Jul 2 06:56:43.292126 systemd-logind[1779]: Removed session 16. Jul 2 06:56:43.308034 systemd[1]: Started sshd@16-172.31.18.4:22-139.178.89.65:45842.service - OpenSSH per-connection server daemon (139.178.89.65:45842). Jul 2 06:56:43.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.4:22-139.178.89.65:45842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.481000 audit[5804]: USER_ACCT pid=5804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.483703 sshd[5804]: Accepted publickey for core from 139.178.89.65 port 45842 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:43.483000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.483000 audit[5804]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd77aa0f0 a2=3 a3=7f5184c1c480 items=0 ppid=1 pid=5804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:43.483000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:43.485786 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:43.494391 systemd-logind[1779]: New session 17 of user core. Jul 2 06:56:43.496706 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 2 06:56:43.506000 audit[5804]: USER_START pid=5804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:43.508000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.384777 sshd[5804]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:44.385000 audit[5804]: USER_END pid=5804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.386000 audit[5804]: CRED_DISP pid=5804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.390723 systemd[1]: sshd@16-172.31.18.4:22-139.178.89.65:45842.service: Deactivated successfully. Jul 2 06:56:44.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.4:22-139.178.89.65:45842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.392053 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 06:56:44.394304 systemd-logind[1779]: Session 17 logged out. Waiting for processes to exit. Jul 2 06:56:44.396813 systemd-logind[1779]: Removed session 17. Jul 2 06:56:44.422685 systemd[1]: Started sshd@17-172.31.18.4:22-139.178.89.65:45858.service - OpenSSH per-connection server daemon (139.178.89.65:45858). Jul 2 06:56:44.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.4:22-139.178.89.65:45858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:44.620000 audit[5816]: USER_ACCT pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.628000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.629000 audit[5816]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe098b3e50 a2=3 a3=7fbd74d40480 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:44.629000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:44.632267 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:44.633565 sshd[5816]: Accepted publickey for core from 139.178.89.65 port 45858 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:44.647566 systemd-logind[1779]: New session 18 of user core. Jul 2 06:56:44.653181 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 06:56:44.663000 audit[5816]: USER_START pid=5816 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:44.666000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:45.734525 containerd[1789]: time="2024-07-02T06:56:45.733433377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 06:56:45.787356 containerd[1789]: time="2024-07-02T06:56:45.787288827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 6.284041291s" Jul 2 06:56:45.787692 containerd[1789]: time="2024-07-02T06:56:45.787358909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 06:56:45.802329 containerd[1789]: time="2024-07-02T06:56:45.802275359Z" level=info msg="CreateContainer within sandbox \"49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 06:56:45.822949 containerd[1789]: time="2024-07-02T06:56:45.822899302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:45.849466 containerd[1789]: time="2024-07-02T06:56:45.849412007Z" level=info msg="ImageCreate event 
name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:45.850814 containerd[1789]: time="2024-07-02T06:56:45.850757079Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:45.852082 containerd[1789]: time="2024-07-02T06:56:45.852046732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:45.888837 containerd[1789]: time="2024-07-02T06:56:45.888784688Z" level=info msg="CreateContainer within sandbox \"49816adde055af93349ff5a3f30a160f60f880217762c426850813e1ed997a8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf\"" Jul 2 06:56:45.890293 containerd[1789]: time="2024-07-02T06:56:45.890259190Z" level=info msg="StartContainer for \"9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf\"" Jul 2 06:56:46.121734 systemd[1]: Started cri-containerd-9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf.scope - libcontainer container 9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf. Jul 2 06:56:46.144659 systemd[1]: run-containerd-runc-k8s.io-9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf-runc.BAto2D.mount: Deactivated successfully. Jul 2 06:56:46.164000 audit: BPF prog-id=192 op=LOAD Jul 2 06:56:46.165000 audit: BPF prog-id=193 op=LOAD Jul 2 06:56:46.165000 audit[5861]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5718 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333165646565386132386633636262303965623063393931376364 Jul 2 06:56:46.165000 audit: BPF prog-id=194 op=LOAD Jul 2 06:56:46.165000 audit[5861]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5718 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333165646565386132386633636262303965623063393931376364 Jul 2 06:56:46.165000 audit: BPF prog-id=194 op=UNLOAD Jul 2 06:56:46.166000 audit: BPF prog-id=193 op=UNLOAD Jul 2 06:56:46.166000 audit: BPF prog-id=195 op=LOAD Jul 2 06:56:46.166000 audit[5861]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5718 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.166000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333165646565386132386633636262303965623063393931376364 Jul 2 06:56:46.209935 containerd[1789]: time="2024-07-02T06:56:46.209880294Z" level=info msg="StartContainer for \"9331edee8a28f3cbb09eb0c9917cdee6f801d7585486f0fd22ae26a938665aaf\" returns successfully" Jul 2 06:56:46.952000 audit[5892]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:46.952000 audit[5892]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffe041e570 a2=0 a3=7fffe041e55c items=0 ppid=3284 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:46.967570 kernel: kauditd_printk_skb: 35 callbacks suppressed Jul 2 06:56:46.967874 kernel: audit: type=1325 audit(1719903406.960:757): table=nat:127 family=2 entries=20 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:46.960000 audit[5892]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:46.960000 audit[5892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffe041e570 a2=0 a3=7fffe041e55c items=0 ppid=3284 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.972115 kernel: audit: type=1300 audit(1719903406.960:757): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffe041e570 a2=0 a3=7fffe041e55c items=0 ppid=3284 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:46.972227 kernel: audit: type=1327 audit(1719903406.960:757): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:46.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:47.368000 audit[5895]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:47.371514 kernel: audit: type=1325 audit(1719903407.368:758): table=filter:128 family=2 entries=10 op=nft_register_rule pid=5895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:47.368000 audit[5895]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffc3469370 a2=0 a3=7fffc346935c items=0 ppid=3284 pid=5895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:47.390563 kernel: audit: type=1300 audit(1719903407.368:758): arch=c000003e syscall=46 success=yes exit=3676 a0=3 
a1=7fffc3469370 a2=0 a3=7fffc346935c items=0 ppid=3284 pid=5895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:47.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:47.397336 kernel: audit: type=1327 audit(1719903407.368:758): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:47.398000 audit[5895]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:47.404714 kernel: audit: type=1325 audit(1719903407.398:759): table=nat:129 family=2 entries=20 op=nft_register_rule pid=5895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:47.404807 kernel: audit: type=1300 audit(1719903407.398:759): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc3469370 a2=0 a3=7fffc346935c items=0 ppid=3284 pid=5895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:47.398000 audit[5895]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc3469370 a2=0 a3=7fffc346935c items=0 ppid=3284 pid=5895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:47.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:47.407502 kernel: audit: type=1327 audit(1719903407.398:759): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:48.688412 update_engine[1780]: I0702 06:56:48.687635 1780 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 06:56:48.688412 update_engine[1780]: I0702 06:56:48.688086 1780 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 06:56:48.688412 update_engine[1780]: I0702 06:56:48.688356 1780 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
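The update_engine lines here (and the "retry 2" batch further up) repeat the same cycle each time: set up the transfer, arm a 1-second timeout source, fail to resolve the literal hostname "disabled", bump the retry counter. A pattern sketch of that loop, not update_engine's actual code; the URL is a placeholder mirroring the unresolvable host named in the error:

    import time, urllib.error, urllib.request

    url = "https://disabled/"   # hostname taken from the "Could not resolve host: disabled" error
    for attempt in range(1, 4):
        try:
            urllib.request.urlopen(url, timeout=1)
            break
        except urllib.error.URLError as err:
            print(f"No HTTP response, retry {attempt + 1}: {err.reason}")
            time.sleep(1)   # re-arm, matching the 1-second timeout source in the log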
Jul 2 06:56:48.689403 update_engine[1780]: E0702 06:56:48.689260 1780 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 06:56:48.689403 update_engine[1780]: I0702 06:56:48.689378 1780 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 06:56:48.918795 sshd[5816]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:48.925236 kernel: audit: type=1106 audit(1719903408.919:760): pid=5816 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:48.919000 audit[5816]: USER_END pid=5816 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:48.919000 audit[5816]: CRED_DISP pid=5816 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:48.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.4:22-139.178.89.65:45858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:48.922813 systemd-logind[1779]: Session 18 logged out. Waiting for processes to exit. Jul 2 06:56:48.924820 systemd[1]: sshd@17-172.31.18.4:22-139.178.89.65:45858.service: Deactivated successfully. Jul 2 06:56:48.925947 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 06:56:48.928250 systemd-logind[1779]: Removed session 18. Jul 2 06:56:48.952468 systemd[1]: Started sshd@18-172.31.18.4:22-139.178.89.65:55328.service - OpenSSH per-connection server daemon (139.178.89.65:55328). Jul 2 06:56:48.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.4:22-139.178.89.65:55328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:49.095635 kubelet[3113]: I0702 06:56:49.075673 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:56:49.103000 audit[5901]: NETFILTER_CFG table=filter:130 family=2 entries=22 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:49.103000 audit[5901]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffd14dd2140 a2=0 a3=7ffd14dd212c items=0 ppid=3284 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:49.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:49.113000 audit[5901]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:49.113000 audit[5901]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd14dd2140 a2=0 a3=0 items=0 ppid=3284 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:49.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:49.136000 audit[5899]: USER_ACCT pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:49.137363 sshd[5899]: Accepted publickey for core from 139.178.89.65 port 55328 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:49.138000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:49.139000 audit[5899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1e1ded40 a2=3 a3=7fe515e61480 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:49.139000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:49.140897 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:49.159620 systemd-logind[1779]: New session 19 of user core. Jul 2 06:56:49.161705 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 2 06:56:49.184000 audit[5899]: USER_START pid=5899 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:49.186000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:49.198000 audit[5904]: NETFILTER_CFG table=filter:132 family=2 entries=34 op=nft_register_rule pid=5904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:49.198000 audit[5904]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7fff8c951a10 a2=0 a3=7fff8c9519fc items=0 ppid=3284 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:49.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:49.205000 audit[5904]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:49.205000 audit[5904]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff8c951a10 a2=0 a3=0 items=0 ppid=3284 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:49.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:50.740069 sshd[5899]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:50.741000 audit[5899]: USER_END pid=5899 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:50.741000 audit[5899]: CRED_DISP pid=5899 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:50.744930 systemd[1]: sshd@18-172.31.18.4:22-139.178.89.65:55328.service: Deactivated successfully. Jul 2 06:56:50.746120 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 06:56:50.747205 systemd-logind[1779]: Session 19 logged out. Waiting for processes to exit. Jul 2 06:56:50.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.4:22-139.178.89.65:55328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.748641 systemd-logind[1779]: Removed session 19. Jul 2 06:56:50.774212 systemd[1]: Started sshd@19-172.31.18.4:22-139.178.89.65:55340.service - OpenSSH per-connection server daemon (139.178.89.65:55340). 
Jul 2 06:56:50.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.4:22-139.178.89.65:55340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.968000 audit[5915]: USER_ACCT pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:50.970800 sshd[5915]: Accepted publickey for core from 139.178.89.65 port 55340 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:50.970000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:50.970000 audit[5915]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdab678ac0 a2=3 a3=7f5c7b154480 items=0 ppid=1 pid=5915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:50.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:50.974233 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:50.986636 systemd-logind[1779]: New session 20 of user core. Jul 2 06:56:50.990756 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 06:56:51.004000 audit[5915]: USER_START pid=5915 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:51.007000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:51.156246 kubelet[3113]: I0702 06:56:51.113317 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c86f666f8-rj9gp" podStartSLOduration=7.76496019 podStartE2EDuration="14.058869743s" podCreationTimestamp="2024-07-02 06:56:37 +0000 UTC" firstStartedPulling="2024-07-02 06:56:39.50065111 +0000 UTC m=+81.791522938" lastFinishedPulling="2024-07-02 06:56:45.794560663 +0000 UTC m=+88.085432491" observedRunningTime="2024-07-02 06:56:46.846238393 +0000 UTC m=+89.137110243" watchObservedRunningTime="2024-07-02 06:56:51.058869743 +0000 UTC m=+93.349741593" Jul 2 06:56:51.223000 audit[5922]: NETFILTER_CFG table=filter:134 family=2 entries=33 op=nft_register_rule pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:51.223000 audit[5922]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffef4363d20 a2=0 a3=7ffef4363d0c items=0 ppid=3284 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:51.223000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:51.236000 audit[5922]: NETFILTER_CFG table=nat:135 family=2 entries=27 op=nft_register_chain pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:51.236000 audit[5922]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffef4363d20 a2=0 a3=0 items=0 ppid=3284 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:51.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:51.545368 sshd[5915]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:51.556000 audit[5915]: USER_END pid=5915 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:51.556000 audit[5915]: CRED_DISP pid=5915 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:51.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.4:22-139.178.89.65:55340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:51.559858 systemd[1]: sshd@19-172.31.18.4:22-139.178.89.65:55340.service: Deactivated successfully. Jul 2 06:56:51.561535 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 06:56:51.568333 systemd-logind[1779]: Session 20 logged out. Waiting for processes to exit. Jul 2 06:56:51.569693 systemd-logind[1779]: Removed session 20. Jul 2 06:56:51.886787 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.cU6lIu.mount: Deactivated successfully. 
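For the pod_startup_latency_tracker entry above, the reported figures are consistent with podStartE2EDuration being observedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Python check using the timestamps from the log entry, rounded to microseconds, so only the last digits differ:

    from datetime import datetime, timezone

    # Timestamps copied from the kubelet entry above (rounded to microseconds).
    created    = datetime(2024, 7, 2, 6, 56, 37, 0, tzinfo=timezone.utc)
    running    = datetime(2024, 7, 2, 6, 56, 51, 58870, tzinfo=timezone.utc)
    pull_start = datetime(2024, 7, 2, 6, 56, 39, 500651, tzinfo=timezone.utc)
    pull_end   = datetime(2024, 7, 2, 6, 56, 45, 794561, tzinfo=timezone.utc)

    e2e = (running - created).total_seconds()             # ~14.058870 s == podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()   # ~7.764960 s  == podStartSLOduration
    print(f"{e2e:.6f} {slo:.6f}")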
Jul 2 06:56:56.240577 kernel: kauditd_printk_skb: 42 callbacks suppressed Jul 2 06:56:56.241044 kernel: audit: type=1325 audit(1719903416.233:787): table=filter:136 family=2 entries=20 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:56.241114 kernel: audit: type=1300 audit(1719903416.233:787): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd8364d8f0 a2=0 a3=7ffd8364d8dc items=0 ppid=3284 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:56.233000 audit[5951]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:56.242309 kernel: audit: type=1327 audit(1719903416.233:787): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:56.233000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd8364d8f0 a2=0 a3=7ffd8364d8dc items=0 ppid=3284 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:56.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:56.237000 audit[5951]: NETFILTER_CFG table=nat:137 family=2 entries=106 op=nft_register_chain pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:56.248690 kernel: audit: type=1325 audit(1719903416.237:788): table=nat:137 family=2 entries=106 op=nft_register_chain pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:56.237000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd8364d8f0 a2=0 a3=7ffd8364d8dc items=0 ppid=3284 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:56.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:56.255721 kernel: audit: type=1300 audit(1719903416.237:788): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd8364d8f0 a2=0 a3=7ffd8364d8dc items=0 ppid=3284 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:56.255881 kernel: audit: type=1327 audit(1719903416.237:788): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:56.589888 systemd[1]: Started sshd@20-172.31.18.4:22-139.178.89.65:55342.service - OpenSSH per-connection server daemon (139.178.89.65:55342). Jul 2 06:56:56.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.4:22-139.178.89.65:55342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:56.601017 kernel: audit: type=1130 audit(1719903416.591:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.4:22-139.178.89.65:55342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:56.794000 audit[5954]: USER_ACCT pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:56.795177 sshd[5954]: Accepted publickey for core from 139.178.89.65 port 55342 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:56:56.798608 kernel: audit: type=1101 audit(1719903416.794:790): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:56.799000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:56.800672 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:56.806599 kernel: audit: type=1103 audit(1719903416.799:791): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:56.809532 kernel: audit: type=1006 audit(1719903416.799:792): pid=5954 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 2 06:56:56.799000 audit[5954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1cb71440 a2=3 a3=7f6a83fad480 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:56.799000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:56.827771 systemd-logind[1779]: New session 21 of user core. Jul 2 06:56:56.844160 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 2 06:56:56.861000 audit[5954]: USER_START pid=5954 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:56.865000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:57.112387 sshd[5954]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:57.117000 audit[5954]: USER_END pid=5954 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:57.117000 audit[5954]: CRED_DISP pid=5954 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:56:57.120854 systemd[1]: sshd@20-172.31.18.4:22-139.178.89.65:55342.service: Deactivated successfully. Jul 2 06:56:57.121835 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 06:56:57.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.4:22-139.178.89.65:55342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:57.123255 systemd-logind[1779]: Session 21 logged out. Waiting for processes to exit. Jul 2 06:56:57.125658 systemd-logind[1779]: Removed session 21. Jul 2 06:56:58.686801 update_engine[1780]: I0702 06:56:58.686715 1780 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 06:56:58.687241 update_engine[1780]: I0702 06:56:58.687067 1780 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 06:56:58.687316 update_engine[1780]: I0702 06:56:58.687295 1780 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 06:56:58.688550 update_engine[1780]: E0702 06:56:58.688095 1780 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 06:56:58.688550 update_engine[1780]: I0702 06:56:58.688190 1780 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 06:56:58.688550 update_engine[1780]: I0702 06:56:58.688197 1780 omaha_request_action.cc:617] Omaha request response: Jul 2 06:56:58.688550 update_engine[1780]: E0702 06:56:58.688279 1780 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 06:56:58.702571 update_engine[1780]: I0702 06:56:58.702263 1780 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 06:56:58.702571 update_engine[1780]: I0702 06:56:58.702297 1780 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 06:56:58.702571 update_engine[1780]: I0702 06:56:58.702303 1780 update_attempter.cc:306] Processing Done. Jul 2 06:56:58.706251 update_engine[1780]: E0702 06:56:58.706202 1780 update_attempter.cc:619] Update failed. 
Jul 2 06:56:58.706251 update_engine[1780]: I0702 06:56:58.706243 1780 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 06:56:58.706251 update_engine[1780]: I0702 06:56:58.706250 1780 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 06:56:58.706251 update_engine[1780]: I0702 06:56:58.706255 1780 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 2 06:56:58.706781 update_engine[1780]: I0702 06:56:58.706344 1780 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 06:56:58.706781 update_engine[1780]: I0702 06:56:58.706374 1780 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 06:56:58.706781 update_engine[1780]: I0702 06:56:58.706378 1780 omaha_request_action.cc:272] Request: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: Jul 2 06:56:58.706781 update_engine[1780]: I0702 06:56:58.706384 1780 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 06:56:58.706781 update_engine[1780]: I0702 06:56:58.706661 1780 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 06:56:58.707265 update_engine[1780]: I0702 06:56:58.706882 1780 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 06:56:58.708518 update_engine[1780]: E0702 06:56:58.708475 1780 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 06:56:58.708753 update_engine[1780]: I0702 06:56:58.708738 1780 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 06:56:58.708837 update_engine[1780]: I0702 06:56:58.708827 1780 omaha_request_action.cc:617] Omaha request response: Jul 2 06:56:58.708905 update_engine[1780]: I0702 06:56:58.708895 1780 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 06:56:58.708966 update_engine[1780]: I0702 06:56:58.708957 1780 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 06:56:58.709029 update_engine[1780]: I0702 06:56:58.709019 1780 update_attempter.cc:306] Processing Done. Jul 2 06:56:58.709095 update_engine[1780]: I0702 06:56:58.709084 1780 update_attempter.cc:310] Error event sent. 
Jul 2 06:56:58.709334 update_engine[1780]: I0702 06:56:58.709316 1780 update_check_scheduler.cc:74] Next update check in 40m27s Jul 2 06:56:58.718756 locksmithd[1804]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 06:56:58.718756 locksmithd[1804]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 06:56:58.943000 audit[5966]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:58.943000 audit[5966]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc41fed090 a2=0 a3=7ffc41fed07c items=0 ppid=3284 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:58.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:56:58.947000 audit[5966]: NETFILTER_CFG table=nat:139 family=2 entries=58 op=nft_register_chain pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:56:58.947000 audit[5966]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffc41fed090 a2=0 a3=7ffc41fed07c items=0 ppid=3284 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:58.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:02.162525 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 06:57:02.162654 kernel: audit: type=1130 audit(1719903422.158:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.4:22-139.178.89.65:56704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:02.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.4:22-139.178.89.65:56704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:02.158424 systemd[1]: Started sshd@21-172.31.18.4:22-139.178.89.65:56704.service - OpenSSH per-connection server daemon (139.178.89.65:56704). 
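The update_engine failures above are self-inflicted rather than a network outage: the Omaha request is being posted to the literal host "disabled" ("Posting an Omaha request to disabled"), so every transfer fails with "Could not resolve host: disabled", the error is converted to code 2000 / kActionCodeOmahaErrorInHTTPResponse (37), and the next check is scheduled 40m27s out. Pointing the update server at an unresolvable name like this is a common way to switch off automatic update checks; presumably the host's /etc/flatcar/update.conf carries something along the lines of SERVER=disabled, though the config file itself does not appear in this log. A trivial Python sketch reproducing the same resolution failure:

    import socket

    try:
        # "disabled" is not a resolvable hostname on a typical resolver,
        # mirroring update_engine's "Could not resolve host: disabled".
        socket.gethostbyname("disabled")
    except socket.gaierror as err:
        print("resolution failed:", err)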
Jul 2 06:57:02.361655 sshd[5970]: Accepted publickey for core from 139.178.89.65 port 56704 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:57:02.361000 audit[5970]: USER_ACCT pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.361000 audit[5970]: CRED_ACQ pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.375328 kernel: audit: type=1101 audit(1719903422.361:801): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.375449 kernel: audit: type=1103 audit(1719903422.361:802): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.375529 kernel: audit: type=1006 audit(1719903422.365:803): pid=5970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 2 06:57:02.365000 audit[5970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffded45fd70 a2=3 a3=7fcc0f821480 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:02.379641 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:02.365000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:02.388386 kernel: audit: type=1300 audit(1719903422.365:803): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffded45fd70 a2=3 a3=7fcc0f821480 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:02.391970 kernel: audit: type=1327 audit(1719903422.365:803): proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:02.428987 systemd-logind[1779]: New session 22 of user core. Jul 2 06:57:02.432248 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 2 06:57:02.443000 audit[5970]: USER_START pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.448693 kernel: audit: type=1105 audit(1719903422.443:804): pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.449000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.456763 kernel: audit: type=1103 audit(1719903422.449:805): pid=5972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.633862 systemd[1]: run-containerd-runc-k8s.io-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31-runc.acSxEv.mount: Deactivated successfully. Jul 2 06:57:02.935052 sshd[5970]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:02.946830 kernel: audit: type=1106 audit(1719903422.936:806): pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.947083 kernel: audit: type=1104 audit(1719903422.936:807): pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.936000 audit[5970]: USER_END pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.936000 audit[5970]: CRED_DISP pid=5970 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:02.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.4:22-139.178.89.65:56704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:02.939515 systemd-logind[1779]: Session 22 logged out. Waiting for processes to exit. Jul 2 06:57:02.945474 systemd[1]: sshd@21-172.31.18.4:22-139.178.89.65:56704.service: Deactivated successfully. Jul 2 06:57:02.946908 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 06:57:02.950148 systemd-logind[1779]: Removed session 22. 
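Each of these SSH sessions leaves the same audit trail: USER_ACCT and CRED_ACQ at authentication, USER_START when the PAM session opens, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection sshd@… unit when it closes. Since the records reach journald over the audit transport, one session's trail can be pulled back out of the journal; a sketch assuming the python-systemd bindings are installed (transport and field names per systemd.journal-fields(7)), filtering on the ses= field embedded in the record text:

    from systemd import journal

    # Kernel-audit records from the current boot, narrowed to ssh session 22.
    reader = journal.Reader()
    reader.this_boot()
    reader.add_match(_TRANSPORT="audit")
    for entry in reader:
        message = str(entry.get("MESSAGE", ""))
        if "ses=22 " in message:
            print(message)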
Jul 2 06:57:07.965057 systemd[1]: Started sshd@22-172.31.18.4:22-139.178.89.65:56708.service - OpenSSH per-connection server daemon (139.178.89.65:56708). Jul 2 06:57:07.968902 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:57:07.968974 kernel: audit: type=1130 audit(1719903427.963:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.4:22-139.178.89.65:56708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:07.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.4:22-139.178.89.65:56708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:08.216664 kernel: audit: type=1101 audit(1719903428.200:810): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.217407 kernel: audit: type=1103 audit(1719903428.204:811): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.217510 kernel: audit: type=1006 audit(1719903428.204:812): pid=6009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 2 06:57:08.217548 kernel: audit: type=1300 audit(1719903428.204:812): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2e82a590 a2=3 a3=7f9671b97480 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:08.218329 kernel: audit: type=1327 audit(1719903428.204:812): proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:08.200000 audit[6009]: USER_ACCT pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.204000 audit[6009]: CRED_ACQ pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.204000 audit[6009]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2e82a590 a2=3 a3=7f9671b97480 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:08.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:08.218811 sshd[6009]: Accepted publickey for core from 139.178.89.65 port 56708 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:57:08.219916 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:08.237415 systemd-logind[1779]: New session 23 of user core. Jul 2 06:57:08.242735 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 2 06:57:08.249000 audit[6009]: USER_START pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.254514 kernel: audit: type=1105 audit(1719903428.249:813): pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.254000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.258568 kernel: audit: type=1103 audit(1719903428.254:814): pid=6011 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.560747 sshd[6009]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:08.561000 audit[6009]: USER_END pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.564857 systemd[1]: sshd@22-172.31.18.4:22-139.178.89.65:56708.service: Deactivated successfully. Jul 2 06:57:08.565945 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 06:57:08.561000 audit[6009]: CRED_DISP pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.569098 kernel: audit: type=1106 audit(1719903428.561:815): pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.569190 kernel: audit: type=1104 audit(1719903428.561:816): pid=6009 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:08.569318 systemd-logind[1779]: Session 23 logged out. Waiting for processes to exit. Jul 2 06:57:08.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.4:22-139.178.89.65:56708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:08.571003 systemd-logind[1779]: Removed session 23. 
Jul 2 06:57:13.487000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:13.492670 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:57:13.492906 kernel: audit: type=1400 audit(1719903433.487:818): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:13.496017 kernel: audit: type=1300 audit(1719903433.487:818): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001b64f00 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:13.487000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001b64f00 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:13.487000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:13.487000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:13.510886 kernel: audit: type=1327 audit(1719903433.487:818): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:13.511104 kernel: audit: type=1400 audit(1719903433.487:819): avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:13.511140 kernel: audit: type=1300 audit(1719903433.487:819): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000c9d170 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:13.487000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000c9d170 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:13.487000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:13.519496 kernel: audit: type=1327 audit(1719903433.487:819): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:13.593169 systemd[1]: Started sshd@23-172.31.18.4:22-139.178.89.65:37450.service - OpenSSH per-connection server daemon (139.178.89.65:37450). Jul 2 06:57:13.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.4:22-139.178.89.65:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.596600 kernel: audit: type=1130 audit(1719903433.592:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.4:22-139.178.89.65:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.761000 audit[6021]: USER_ACCT pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:13.768650 kernel: audit: type=1101 audit(1719903433.761:821): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:13.768737 kernel: audit: type=1103 audit(1719903433.762:822): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:13.768777 kernel: audit: type=1006 audit(1719903433.763:823): pid=6021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 2 06:57:13.762000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:13.765015 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:13.769159 sshd[6021]: Accepted publickey for core from 139.178.89.65 port 37450 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:57:13.763000 audit[6021]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4e426de0 a2=3 a3=7f130ff9a480 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:13.763000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:13.775614 systemd-logind[1779]: New session 24 of user core. 
Jul 2 06:57:13.777724 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 06:57:13.782000 audit[6021]: USER_START pid=6021 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:13.785000 audit[6028]: CRED_ACQ pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:14.020709 sshd[6021]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:14.021000 audit[6021]: USER_END pid=6021 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:14.021000 audit[6021]: CRED_DISP pid=6021 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:14.028548 systemd[1]: sshd@23-172.31.18.4:22-139.178.89.65:37450.service: Deactivated successfully. Jul 2 06:57:14.029554 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 06:57:14.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.4:22-139.178.89.65:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:14.030574 systemd-logind[1779]: Session 24 logged out. Waiting for processes to exit. Jul 2 06:57:14.031753 systemd-logind[1779]: Removed session 24. 
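The avc: denied { watch } records (above for kube-controller-manager, and again below for kube-apiserver) show the containerized control-plane components being refused inotify watches on the etc_t-labelled certificates under /etc/kubernetes/pki while SELinux is enforcing (permissive=0). Each denial is paired with a SYSCALL record: arch=c000003e is x86_64, where syscall 254 is inotify_add_watch, and exit=-13 is the errno returned to the caller. A one-liner to confirm the errno name:

    import errno

    # exit=-13 in the SYSCALL records above and below corresponds to:
    print(errno.errorcode[13])  # EACCES ("Permission denied")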
Jul 2 06:57:14.296000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.296000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.296000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c0070baff0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.296000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:14.296000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00487d590 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.296000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:14.297000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7806 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.297000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c0070bb0b0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.297000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:14.298000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.298000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c0083faf60 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.298000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:14.302000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.302000 audit[2785]: AVC avc: denied { watch } for pid=2785 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c707,c915 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:14.302000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c0057066a0 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.302000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:14.302000 audit[2785]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c0070bb260 a2=fc6 a3=0 items=0 ppid=2638 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c707,c915 key=(null) Jul 2 06:57:14.302000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jul 2 06:57:17.159000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:17.159000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001aff440 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:17.159000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:17.165000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:17.165000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 
scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:17.165000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001f13220 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:17.165000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:17.165000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001aff460 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:17.165000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:17.167000 audit[2810]: AVC avc: denied { watch } for pid=2810 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:17.167000 audit[2810]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001aff4a0 a2=fc6 a3=0 items=0 ppid=2649 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:17.167000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:19.068460 systemd[1]: Started sshd@24-172.31.18.4:22-139.178.89.65:56352.service - OpenSSH per-connection server daemon (139.178.89.65:56352). Jul 2 06:57:19.075681 kernel: kauditd_printk_skb: 37 callbacks suppressed Jul 2 06:57:19.075814 kernel: audit: type=1130 audit(1719903439.067:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.4:22-139.178.89.65:56352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:19.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.4:22-139.178.89.65:56352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:19.268000 audit[6039]: USER_ACCT pid=6039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.268000 audit[6039]: CRED_ACQ pid=6039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.271798 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:19.276741 kernel: audit: type=1101 audit(1719903439.268:840): pid=6039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.276785 kernel: audit: type=1103 audit(1719903439.268:841): pid=6039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.276816 sshd[6039]: Accepted publickey for core from 139.178.89.65 port 56352 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:57:19.280711 kernel: audit: type=1006 audit(1719903439.268:842): pid=6039 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 2 06:57:19.268000 audit[6039]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe21829d70 a2=3 a3=7f7e5c07f480 items=0 ppid=1 pid=6039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:19.287890 kernel: audit: type=1300 audit(1719903439.268:842): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe21829d70 a2=3 a3=7f7e5c07f480 items=0 ppid=1 pid=6039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:19.288019 kernel: audit: type=1327 audit(1719903439.268:842): proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:19.268000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:19.291359 systemd-logind[1779]: New session 25 of user core. Jul 2 06:57:19.297782 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 06:57:19.303000 audit[6039]: USER_START pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.305000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.310234 kernel: audit: type=1105 audit(1719903439.303:843): pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.310442 kernel: audit: type=1103 audit(1719903439.305:844): pid=6041 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.650131 sshd[6039]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:19.651000 audit[6039]: USER_END pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.656250 systemd[1]: sshd@24-172.31.18.4:22-139.178.89.65:56352.service: Deactivated successfully. Jul 2 06:57:19.656597 kernel: audit: type=1106 audit(1719903439.651:845): pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.657426 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 06:57:19.651000 audit[6039]: CRED_DISP pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.659593 systemd-logind[1779]: Session 25 logged out. Waiting for processes to exit. Jul 2 06:57:19.663420 kernel: audit: type=1104 audit(1719903439.651:846): pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:19.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.4:22-139.178.89.65:56352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:19.665275 systemd-logind[1779]: Removed session 25. Jul 2 06:57:24.692668 systemd[1]: Started sshd@25-172.31.18.4:22-139.178.89.65:56364.service - OpenSSH per-connection server daemon (139.178.89.65:56364). 
Jul 2 06:57:24.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.4:22-139.178.89.65:56364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:24.699115 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:57:24.699209 kernel: audit: type=1130 audit(1719903444.695:848): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.4:22-139.178.89.65:56364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:24.995000 audit[6082]: USER_ACCT pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:24.997597 sshd[6082]: Accepted publickey for core from 139.178.89.65 port 56364 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 06:57:25.009929 kernel: audit: type=1101 audit(1719903444.995:849): pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.010118 kernel: audit: type=1103 audit(1719903444.995:850): pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.010182 kernel: audit: type=1006 audit(1719903445.002:851): pid=6082 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 2 06:57:24.995000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.002000 audit[6082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1bd122f0 a2=3 a3=7fac7fe5d480 items=0 ppid=1 pid=6082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:25.013147 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:25.015192 kernel: audit: type=1300 audit(1719903445.002:851): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1bd122f0 a2=3 a3=7fac7fe5d480 items=0 ppid=1 pid=6082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:25.015553 kernel: audit: type=1327 audit(1719903445.002:851): proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:25.002000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:25.027101 systemd-logind[1779]: New session 26 of user core. Jul 2 06:57:25.029769 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 06:57:25.037000 audit[6082]: USER_START pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.044662 kernel: audit: type=1105 audit(1719903445.037:852): pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.048006 kernel: audit: type=1103 audit(1719903445.044:853): pid=6085 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.044000 audit[6085]: CRED_ACQ pid=6085 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.325385 sshd[6082]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:25.328000 audit[6082]: USER_END pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.336195 kernel: audit: type=1106 audit(1719903445.328:854): pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.336313 kernel: audit: type=1104 audit(1719903445.328:855): pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.328000 audit[6082]: CRED_DISP pid=6082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 06:57:25.332985 systemd[1]: sshd@25-172.31.18.4:22-139.178.89.65:56364.service: Deactivated successfully. Jul 2 06:57:25.334480 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 06:57:25.339702 systemd-logind[1779]: Session 26 logged out. Waiting for processes to exit. Jul 2 06:57:25.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.4:22-139.178.89.65:56364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:25.341812 systemd-logind[1779]: Removed session 26. Jul 2 06:57:32.542684 systemd[1]: run-containerd-runc-k8s.io-ad038b40dfefdadcd0ff90855826cd337b0c5e17e4a166ac1c68d4146f2fea31-runc.J5wLLx.mount: Deactivated successfully. 
Jul 2 06:57:39.723667 systemd[1]: cri-containerd-61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d.scope: Deactivated successfully. Jul 2 06:57:39.723991 systemd[1]: cri-containerd-61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d.scope: Consumed 4.041s CPU time. Jul 2 06:57:39.724000 audit: BPF prog-id=78 op=UNLOAD Jul 2 06:57:39.725993 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:57:39.726091 kernel: audit: type=1334 audit(1719903459.724:857): prog-id=78 op=UNLOAD Jul 2 06:57:39.724000 audit: BPF prog-id=95 op=UNLOAD Jul 2 06:57:39.728315 kernel: audit: type=1334 audit(1719903459.724:858): prog-id=95 op=UNLOAD Jul 2 06:57:39.824868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d-rootfs.mount: Deactivated successfully. Jul 2 06:57:39.845883 containerd[1789]: time="2024-07-02T06:57:39.827738771Z" level=info msg="shim disconnected" id=61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d namespace=k8s.io Jul 2 06:57:39.852450 containerd[1789]: time="2024-07-02T06:57:39.852388624Z" level=warning msg="cleaning up after shim disconnected" id=61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d namespace=k8s.io Jul 2 06:57:39.852450 containerd[1789]: time="2024-07-02T06:57:39.852438851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:39.912000 audit: BPF prog-id=110 op=UNLOAD Jul 2 06:57:39.912704 systemd[1]: cri-containerd-a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d.scope: Deactivated successfully. Jul 2 06:57:39.914962 kernel: audit: type=1334 audit(1719903459.912:859): prog-id=110 op=UNLOAD Jul 2 06:57:39.913008 systemd[1]: cri-containerd-a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d.scope: Consumed 6.064s CPU time. Jul 2 06:57:39.918000 audit: BPF prog-id=113 op=UNLOAD Jul 2 06:57:39.920543 kernel: audit: type=1334 audit(1719903459.918:860): prog-id=113 op=UNLOAD Jul 2 06:57:39.981280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d-rootfs.mount: Deactivated successfully. Jul 2 06:57:39.990308 containerd[1789]: time="2024-07-02T06:57:39.990146909Z" level=info msg="shim disconnected" id=a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d namespace=k8s.io Jul 2 06:57:39.991358 containerd[1789]: time="2024-07-02T06:57:39.990312243Z" level=warning msg="cleaning up after shim disconnected" id=a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d namespace=k8s.io Jul 2 06:57:39.991358 containerd[1789]: time="2024-07-02T06:57:39.990327487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:40.117843 kubelet[3113]: I0702 06:57:40.117797 3113 scope.go:117] "RemoveContainer" containerID="61b53c6d291e8e036dc648e366d8ae63a003f41c01a3244ca153b6b7ceb5899d" Jul 2 06:57:40.142999 containerd[1789]: time="2024-07-02T06:57:40.142956654Z" level=info msg="CreateContainer within sandbox \"29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 06:57:40.210239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190714156.mount: Deactivated successfully. Jul 2 06:57:40.223979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084523586.mount: Deactivated successfully. 
Jul 2 06:57:40.228759 kubelet[3113]: E0702 06:57:40.226366 3113 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-18-4)" Jul 2 06:57:40.235456 containerd[1789]: time="2024-07-02T06:57:40.234871066Z" level=info msg="CreateContainer within sandbox \"29e9e2d9032c21edf8e7c461ef4f8b5ac6695d0def3ea5385b7f3242f6ba5b80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fe588cddbb6bb7f1951ccb653156e8074ff4717b6dba07e900417fc501e63885\"" Jul 2 06:57:40.237287 containerd[1789]: time="2024-07-02T06:57:40.237243479Z" level=info msg="StartContainer for \"fe588cddbb6bb7f1951ccb653156e8074ff4717b6dba07e900417fc501e63885\"" Jul 2 06:57:40.289113 systemd[1]: Started cri-containerd-fe588cddbb6bb7f1951ccb653156e8074ff4717b6dba07e900417fc501e63885.scope - libcontainer container fe588cddbb6bb7f1951ccb653156e8074ff4717b6dba07e900417fc501e63885. Jul 2 06:57:40.328000 audit: BPF prog-id=196 op=LOAD Jul 2 06:57:40.330521 kernel: audit: type=1334 audit(1719903460.328:861): prog-id=196 op=LOAD Jul 2 06:57:40.343130 kernel: audit: type=1334 audit(1719903460.331:862): prog-id=197 op=LOAD Jul 2 06:57:40.343423 kernel: audit: type=1300 audit(1719903460.331:862): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2649 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:40.343528 kernel: audit: type=1327 audit(1719903460.331:862): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353838636464626236626237663139353163636236353331353665 Jul 2 06:57:40.343602 kernel: audit: type=1334 audit(1719903460.331:863): prog-id=198 op=LOAD Jul 2 06:57:40.331000 audit: BPF prog-id=197 op=LOAD Jul 2 06:57:40.331000 audit[6192]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2649 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:40.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353838636464626236626237663139353163636236353331353665 Jul 2 06:57:40.331000 audit: BPF prog-id=198 op=LOAD Jul 2 06:57:40.331000 audit[6192]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2649 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:40.353535 kernel: audit: type=1300 audit(1719903460.331:863): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2649 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:40.331000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353838636464626236626237663139353163636236353331353665 Jul 2 06:57:40.332000 audit: BPF prog-id=198 op=UNLOAD Jul 2 06:57:40.332000 audit: BPF prog-id=197 op=UNLOAD Jul 2 06:57:40.332000 audit: BPF prog-id=199 op=LOAD Jul 2 06:57:40.332000 audit[6192]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2649 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:40.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353838636464626236626237663139353163636236353331353665 Jul 2 06:57:40.395693 containerd[1789]: time="2024-07-02T06:57:40.395630255Z" level=info msg="StartContainer for \"fe588cddbb6bb7f1951ccb653156e8074ff4717b6dba07e900417fc501e63885\" returns successfully" Jul 2 06:57:41.095236 kubelet[3113]: I0702 06:57:41.095207 3113 scope.go:117] "RemoveContainer" containerID="a7a5607da2722fb73c858c8266df854ec5c60719b51267de9a910c319af74f8d" Jul 2 06:57:41.101359 containerd[1789]: time="2024-07-02T06:57:41.101313604Z" level=info msg="CreateContainer within sandbox \"b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 2 06:57:41.140433 containerd[1789]: time="2024-07-02T06:57:41.140381248Z" level=info msg="CreateContainer within sandbox \"b441cf35075130d1e0151f943210f30938f1826c8d4d8545b9aa2711f7f10cbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"24c784d3c78e31e0f4f5c850f5cf35c23154b408ddea7b49946aa653fc42006c\"" Jul 2 06:57:41.141097 containerd[1789]: time="2024-07-02T06:57:41.141062452Z" level=info msg="StartContainer for \"24c784d3c78e31e0f4f5c850f5cf35c23154b408ddea7b49946aa653fc42006c\"" Jul 2 06:57:41.190730 systemd[1]: Started cri-containerd-24c784d3c78e31e0f4f5c850f5cf35c23154b408ddea7b49946aa653fc42006c.scope - libcontainer container 24c784d3c78e31e0f4f5c850f5cf35c23154b408ddea7b49946aa653fc42006c. 
Jul 2 06:57:41.222000 audit: BPF prog-id=200 op=LOAD Jul 2 06:57:41.223000 audit: BPF prog-id=201 op=LOAD Jul 2 06:57:41.223000 audit[6230]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3239 pid=6230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:41.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633738346433633738653331653066346635633835306635636633 Jul 2 06:57:41.223000 audit: BPF prog-id=202 op=LOAD Jul 2 06:57:41.223000 audit[6230]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3239 pid=6230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:41.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633738346433633738653331653066346635633835306635636633 Jul 2 06:57:41.223000 audit: BPF prog-id=202 op=UNLOAD Jul 2 06:57:41.223000 audit: BPF prog-id=201 op=UNLOAD Jul 2 06:57:41.223000 audit: BPF prog-id=203 op=LOAD Jul 2 06:57:41.223000 audit[6230]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3239 pid=6230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:41.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633738346433633738653331653066346635633835306635636633 Jul 2 06:57:41.292606 containerd[1789]: time="2024-07-02T06:57:41.292558028Z" level=info msg="StartContainer for \"24c784d3c78e31e0f4f5c850f5cf35c23154b408ddea7b49946aa653fc42006c\" returns successfully" Jul 2 06:57:42.038000 audit[6203]: AVC avc: denied { watch } for pid=6203 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7804 scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:42.038000 audit[6203]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0003f4a50 a2=fc6 a3=0 items=0 ppid=2649 pid=6203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:42.038000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:42.040000 audit[6203]: AVC avc: denied { watch } for pid=6203 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7798 
scontext=system_u:system_r:container_t:s0:c163,c707 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:57:42.040000 audit[6203]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000b3c060 a2=fc6 a3=0 items=0 ppid=2649 pid=6203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c163,c707 key=(null) Jul 2 06:57:42.040000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:57:42.381129 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.UXewKN.mount: Deactivated successfully. Jul 2 06:57:44.793139 systemd[1]: cri-containerd-2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c.scope: Deactivated successfully. Jul 2 06:57:44.793496 systemd[1]: cri-containerd-2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c.scope: Consumed 1.982s CPU time. Jul 2 06:57:44.801134 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 2 06:57:44.801275 kernel: audit: type=1334 audit(1719903464.797:875): prog-id=74 op=UNLOAD Jul 2 06:57:44.801321 kernel: audit: type=1334 audit(1719903464.797:876): prog-id=87 op=UNLOAD Jul 2 06:57:44.797000 audit: BPF prog-id=74 op=UNLOAD Jul 2 06:57:44.797000 audit: BPF prog-id=87 op=UNLOAD Jul 2 06:57:44.825966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c-rootfs.mount: Deactivated successfully. Jul 2 06:57:44.827027 containerd[1789]: time="2024-07-02T06:57:44.826963082Z" level=info msg="shim disconnected" id=2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c namespace=k8s.io Jul 2 06:57:44.827587 containerd[1789]: time="2024-07-02T06:57:44.827559668Z" level=warning msg="cleaning up after shim disconnected" id=2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c namespace=k8s.io Jul 2 06:57:44.827727 containerd[1789]: time="2024-07-02T06:57:44.827699082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:45.144064 kubelet[3113]: I0702 06:57:45.144025 3113 scope.go:117] "RemoveContainer" containerID="2f51875446da457db5cce471586537e646be6323dffc4481bdd0efc8523cfd3c" Jul 2 06:57:45.153375 containerd[1789]: time="2024-07-02T06:57:45.153322083Z" level=info msg="CreateContainer within sandbox \"a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 06:57:45.207183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267325863.mount: Deactivated successfully. 
Jul 2 06:57:45.217184 containerd[1789]: time="2024-07-02T06:57:45.217134088Z" level=info msg="CreateContainer within sandbox \"a07682c5838ea5b98fb03e7993ab4ac4a50e583d428fa99237187cce4e53f391\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8067c80228c1396e51ed0c2a108c4327062f40dd83fecd6e626d498c82a60a24\"" Jul 2 06:57:45.217754 containerd[1789]: time="2024-07-02T06:57:45.217717894Z" level=info msg="StartContainer for \"8067c80228c1396e51ed0c2a108c4327062f40dd83fecd6e626d498c82a60a24\"" Jul 2 06:57:45.268803 systemd[1]: Started cri-containerd-8067c80228c1396e51ed0c2a108c4327062f40dd83fecd6e626d498c82a60a24.scope - libcontainer container 8067c80228c1396e51ed0c2a108c4327062f40dd83fecd6e626d498c82a60a24. Jul 2 06:57:45.306000 audit: BPF prog-id=204 op=LOAD Jul 2 06:57:45.318256 kernel: audit: type=1334 audit(1719903465.306:877): prog-id=204 op=LOAD Jul 2 06:57:45.318477 kernel: audit: type=1334 audit(1719903465.307:878): prog-id=205 op=LOAD Jul 2 06:57:45.318539 kernel: audit: type=1300 audit(1719903465.307:878): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2637 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:45.318704 kernel: audit: type=1327 audit(1719903465.307:878): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363763383032323863313339366535316564306332613130386334 Jul 2 06:57:45.307000 audit: BPF prog-id=205 op=LOAD Jul 2 06:57:45.307000 audit[6316]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2637 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:45.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363763383032323863313339366535316564306332613130386334 Jul 2 06:57:45.326343 kernel: audit: type=1334 audit(1719903465.307:879): prog-id=206 op=LOAD Jul 2 06:57:45.332426 kernel: audit: type=1300 audit(1719903465.307:879): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2637 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:45.332637 kernel: audit: type=1327 audit(1719903465.307:879): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363763383032323863313339366535316564306332613130386334 Jul 2 06:57:45.307000 audit: BPF prog-id=206 op=LOAD Jul 2 06:57:45.307000 audit[6316]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2637 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:45.307000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363763383032323863313339366535316564306332613130386334 Jul 2 06:57:45.307000 audit: BPF prog-id=206 op=UNLOAD Jul 2 06:57:45.307000 audit: BPF prog-id=205 op=UNLOAD Jul 2 06:57:45.307000 audit: BPF prog-id=207 op=LOAD Jul 2 06:57:45.307000 audit[6316]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2637 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:45.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363763383032323863313339366535316564306332613130386334 Jul 2 06:57:45.334588 kernel: audit: type=1334 audit(1719903465.307:880): prog-id=206 op=UNLOAD Jul 2 06:57:45.402403 containerd[1789]: time="2024-07-02T06:57:45.402058533Z" level=info msg="StartContainer for \"8067c80228c1396e51ed0c2a108c4327062f40dd83fecd6e626d498c82a60a24\" returns successfully" Jul 2 06:57:50.237038 kubelet[3113]: E0702 06:57:50.236978 3113 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 06:57:51.483641 systemd[1]: run-containerd-runc-k8s.io-aa30dd67261264c20d301e36687e553140148423c189612b99decc8ac8852ff8-runc.Uwe5ua.mount: Deactivated successfully.