Jun 25 16:19:24.965093 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:19:24.965117 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:19:24.965126 kernel: BIOS-provided physical RAM map: Jun 25 16:19:24.965133 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:19:24.965139 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:19:24.965145 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:19:24.965155 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jun 25 16:19:24.965161 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jun 25 16:19:24.965167 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jun 25 16:19:24.965173 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:19:24.965180 kernel: NX (Execute Disable) protection: active Jun 25 16:19:24.965186 kernel: SMBIOS 2.7 present. Jun 25 16:19:24.965192 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 25 16:19:24.965199 kernel: Hypervisor detected: KVM Jun 25 16:19:24.965209 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:19:24.965216 kernel: kvm-clock: using sched offset of 7978462893 cycles Jun 25 16:19:24.965223 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:19:24.965231 kernel: tsc: Detected 2499.996 MHz processor Jun 25 16:19:24.965238 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:19:24.965245 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:19:24.965252 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jun 25 16:19:24.965270 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:19:24.965279 kernel: Using GB pages for direct mapping Jun 25 16:19:24.965286 kernel: ACPI: Early table checksum verification disabled Jun 25 16:19:24.965293 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jun 25 16:19:24.965300 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jun 25 16:19:24.965307 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 16:19:24.965314 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 25 16:19:24.965321 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jun 25 16:19:24.965330 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:19:24.965337 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 16:19:24.965344 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 25 16:19:24.965351 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 16:19:24.965358 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON 
AMZNWAET 00000001 AMZN 00000001) Jun 25 16:19:24.965365 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 25 16:19:24.965371 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 16:19:24.965378 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jun 25 16:19:24.965385 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jun 25 16:19:24.965395 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jun 25 16:19:24.965402 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jun 25 16:19:24.965412 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jun 25 16:19:24.965420 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jun 25 16:19:24.965428 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jun 25 16:19:24.965435 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jun 25 16:19:24.965445 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jun 25 16:19:24.965453 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jun 25 16:19:24.965461 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:19:24.965468 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:19:24.965476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 25 16:19:24.965483 kernel: NUMA: Initialized distance table, cnt=1 Jun 25 16:19:24.965491 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jun 25 16:19:24.965499 kernel: Zone ranges: Jun 25 16:19:24.965571 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:19:24.965585 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jun 25 16:19:24.965593 kernel: Normal empty Jun 25 16:19:24.965600 kernel: Movable zone start for each node Jun 25 16:19:24.965608 kernel: Early memory node ranges Jun 25 16:19:24.965616 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:19:24.965623 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jun 25 16:19:24.965631 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jun 25 16:19:24.965639 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:19:24.965646 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:19:24.965656 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jun 25 16:19:24.965664 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 16:19:24.965672 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:19:24.965679 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 25 16:19:24.965687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:19:24.965695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:19:24.965702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:19:24.965710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:19:24.965718 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:19:24.965728 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:19:24.965736 kernel: TSC deadline timer available Jun 25 16:19:24.965743 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:19:24.965751 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jun 25 16:19:24.965759 kernel: Booting paravirtualized kernel on KVM Jun 25 16:19:24.965766 kernel: 
clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:19:24.965774 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:19:24.965782 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:19:24.965790 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:19:24.965800 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:19:24.965807 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:19:24.965815 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:19:24.965823 kernel: Fallback order for Node 0: 0 Jun 25 16:19:24.965830 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jun 25 16:19:24.965837 kernel: Policy zone: DMA32 Jun 25 16:19:24.965846 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:19:24.965854 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:19:24.965864 kernel: random: crng init done Jun 25 16:19:24.965872 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:19:24.965880 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:19:24.965888 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:19:24.965896 kernel: Memory: 1928268K/2057760K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129232K reserved, 0K cma-reserved) Jun 25 16:19:24.965903 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:19:24.965911 kernel: Kernel/User page tables isolation: enabled Jun 25 16:19:24.965918 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:19:24.965926 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:19:24.965936 kernel: Dynamic Preempt: voluntary Jun 25 16:19:24.965944 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:19:24.965952 kernel: rcu: RCU event tracing is enabled. Jun 25 16:19:24.965960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:19:24.965968 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:19:24.965975 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:19:24.965983 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:19:24.965991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:19:24.965998 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:19:24.966008 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:19:24.966016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 25 16:19:24.966024 kernel: Console: colour VGA+ 80x25 Jun 25 16:19:24.966031 kernel: printk: console [ttyS0] enabled Jun 25 16:19:24.966039 kernel: ACPI: Core revision 20220331 Jun 25 16:19:24.966047 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 25 16:19:24.966065 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:19:24.966073 kernel: x2apic enabled Jun 25 16:19:24.966080 kernel: Switched APIC routing to physical x2apic. Jun 25 16:19:24.966088 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jun 25 16:19:24.966099 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jun 25 16:19:24.966106 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:19:24.966122 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:19:24.966133 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:19:24.966141 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:19:24.966149 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:19:24.966157 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:19:24.966165 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 16:19:24.966219 kernel: RETBleed: Vulnerable Jun 25 16:19:24.966228 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:19:24.966236 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:19:24.966244 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:19:24.966252 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:19:24.966263 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:19:24.966313 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:19:24.966322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:19:24.966420 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 25 16:19:24.966430 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 25 16:19:24.966443 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 16:19:24.966451 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 16:19:24.966459 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 16:19:24.966496 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 25 16:19:24.966508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:19:24.966517 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 25 16:19:24.966525 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 25 16:19:24.966534 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 25 16:19:24.966542 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 25 16:19:24.966550 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 25 16:19:24.966587 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 25 16:19:24.966716 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jun 25 16:19:24.966734 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:19:24.966772 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:19:24.966783 kernel: LSM: Security Framework initializing Jun 25 16:19:24.966791 kernel: SELinux: Initializing. Jun 25 16:19:24.966800 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:19:24.966808 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:19:24.966816 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 16:19:24.966849 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:19:24.966863 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:19:24.966872 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:19:24.966881 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:19:24.966892 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:19:24.966900 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:19:24.966936 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 16:19:24.966948 kernel: signal: max sigframe size: 3632 Jun 25 16:19:24.966958 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:19:24.966967 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:19:24.966975 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:19:24.966984 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:19:24.967019 kernel: x86: Booting SMP configuration: Jun 25 16:19:24.967035 kernel: .... node #0, CPUs: #1 Jun 25 16:19:24.967045 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 16:19:24.967065 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jun 25 16:19:24.967099 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:19:24.967113 kernel: smpboot: Max logical packages: 1 Jun 25 16:19:24.967122 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jun 25 16:19:24.967131 kernel: devtmpfs: initialized Jun 25 16:19:24.967140 kernel: x86/mm: Memory block size: 128MB Jun 25 16:19:24.967148 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:19:24.967188 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:19:24.967199 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:19:24.967209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:19:24.967218 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:19:24.967227 kernel: audit: type=2000 audit(1719332363.819:1): state=initialized audit_enabled=0 res=1 Jun 25 16:19:24.967235 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:19:24.967350 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:19:24.967363 kernel: cpuidle: using governor menu Jun 25 16:19:24.967373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:19:24.967385 kernel: dca service started, version 1.12.1 Jun 25 16:19:24.967393 kernel: PCI: Using configuration type 1 for base access Jun 25 16:19:24.967401 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:19:24.967410 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:19:24.967418 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:19:24.967427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:19:24.967435 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:19:24.967443 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:19:24.967452 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:19:24.967462 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:19:24.967471 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:19:24.967479 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 16:19:24.967487 kernel: ACPI: Interpreter enabled Jun 25 16:19:24.967496 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:19:24.967505 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:19:24.967513 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:19:24.967522 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:19:24.967530 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 16:19:24.967541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:19:24.967704 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:19:24.967793 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:19:24.967876 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jun 25 16:19:24.967887 kernel: acpiphp: Slot [3] registered Jun 25 16:19:24.967895 kernel: acpiphp: Slot [4] registered Jun 25 16:19:24.967904 kernel: acpiphp: Slot [5] registered Jun 25 16:19:24.967915 kernel: acpiphp: Slot [6] registered Jun 25 16:19:24.967924 kernel: acpiphp: Slot [7] registered Jun 25 16:19:24.967932 kernel: acpiphp: Slot [8] registered Jun 25 16:19:24.967940 kernel: acpiphp: Slot [9] registered Jun 25 16:19:24.967948 kernel: acpiphp: Slot [10] registered Jun 25 16:19:24.967957 kernel: acpiphp: Slot [11] registered Jun 25 16:19:24.967965 kernel: acpiphp: Slot [12] registered Jun 25 16:19:24.967973 kernel: acpiphp: Slot [13] registered Jun 25 16:19:24.967981 kernel: acpiphp: Slot [14] registered Jun 25 16:19:24.967989 kernel: acpiphp: Slot [15] registered Jun 25 16:19:24.967999 kernel: acpiphp: Slot [16] registered Jun 25 16:19:24.968007 kernel: acpiphp: Slot [17] registered Jun 25 16:19:24.968016 kernel: acpiphp: Slot [18] registered Jun 25 16:19:24.968024 kernel: acpiphp: Slot [19] registered Jun 25 16:19:24.968032 kernel: acpiphp: Slot [20] registered Jun 25 16:19:24.968040 kernel: acpiphp: Slot [21] registered Jun 25 16:19:24.968048 kernel: acpiphp: Slot [22] registered Jun 25 16:19:24.968067 kernel: acpiphp: Slot [23] registered Jun 25 16:19:24.968076 kernel: acpiphp: Slot [24] registered Jun 25 16:19:24.968086 kernel: acpiphp: Slot [25] registered Jun 25 16:19:24.968094 kernel: acpiphp: Slot [26] registered Jun 25 16:19:24.968102 kernel: acpiphp: Slot [27] registered Jun 25 16:19:24.968110 kernel: acpiphp: Slot [28] registered Jun 25 16:19:24.968119 kernel: acpiphp: Slot [29] registered Jun 25 16:19:24.968127 kernel: acpiphp: Slot [30] registered Jun 25 16:19:24.968135 kernel: acpiphp: Slot [31] registered Jun 25 16:19:24.968143 kernel: PCI host bridge to bus 0000:00 Jun 25 16:19:24.968230 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:19:24.968569 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:19:24.968723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:19:24.969008 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:19:24.969359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:19:24.969686 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:19:24.970146 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:19:24.970306 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 25 16:19:24.970448 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 16:19:24.970576 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 16:19:24.970704 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 25 16:19:24.970893 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 25 16:19:24.971117 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 25 16:19:24.971605 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 25 16:19:24.971743 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 25 16:19:24.971863 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 25 16:19:24.971995 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 25 16:19:24.972336 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jun 25 16:19:24.972460 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 16:19:24.972576 kernel: pci 0000:00:03.0: 
Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:19:24.972703 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 16:19:24.972828 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jun 25 16:19:24.973859 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 16:19:24.974112 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jun 25 16:19:24.974137 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:19:24.974151 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:19:24.974166 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:19:24.974181 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:19:24.974197 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:19:24.974219 kernel: iommu: Default domain type: Translated Jun 25 16:19:24.974232 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:19:24.974246 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:19:24.974260 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:19:24.974275 kernel: PTP clock support registered Jun 25 16:19:24.974291 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:19:24.974305 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:19:24.974320 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:19:24.974336 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jun 25 16:19:24.974566 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 25 16:19:24.974706 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 25 16:19:24.974842 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:19:24.974861 kernel: vgaarb: loaded Jun 25 16:19:24.974875 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 25 16:19:24.974983 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 25 16:19:24.975002 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:19:24.975017 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:19:24.975036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:19:24.975052 kernel: pnp: PnP ACPI init Jun 25 16:19:24.975120 kernel: pnp: PnP ACPI: found 5 devices Jun 25 16:19:24.975136 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:19:24.975228 kernel: NET: Registered PF_INET protocol family Jun 25 16:19:24.975244 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:19:24.975261 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:19:24.975277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:19:24.975293 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:19:24.975488 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:19:24.975507 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:19:24.975524 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:19:24.975540 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:19:24.975555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:19:24.975571 kernel: NET: Registered PF_XDP protocol family Jun 25 16:19:24.975719 kernel: pci_bus 0000:00: resource 4 [io 
0x0000-0x0cf7 window] Jun 25 16:19:24.975838 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:19:24.976101 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:19:24.976219 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:19:24.976354 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:19:24.976376 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:19:24.976392 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:19:24.976408 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jun 25 16:19:24.976424 kernel: clocksource: Switched to clocksource tsc Jun 25 16:19:24.976440 kernel: Initialise system trusted keyrings Jun 25 16:19:24.976459 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:19:24.976475 kernel: Key type asymmetric registered Jun 25 16:19:24.976490 kernel: Asymmetric key parser 'x509' registered Jun 25 16:19:24.976505 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:19:24.976608 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:19:24.976626 kernel: io scheduler mq-deadline registered Jun 25 16:19:24.976642 kernel: io scheduler kyber registered Jun 25 16:19:24.976657 kernel: io scheduler bfq registered Jun 25 16:19:24.976673 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:19:24.976793 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:19:24.976812 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:19:24.976828 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:19:24.976844 kernel: i8042: Warning: Keylock active Jun 25 16:19:24.976860 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:19:24.976876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:19:24.977329 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 16:19:24.977462 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 16:19:24.977799 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T16:19:24 UTC (1719332364) Jun 25 16:19:24.977923 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 16:19:24.977985 kernel: intel_pstate: CPU model not supported Jun 25 16:19:24.978001 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:19:24.978096 kernel: Segment Routing with IPv6 Jun 25 16:19:24.978112 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:19:24.978126 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:19:24.978139 kernel: Key type dns_resolver registered Jun 25 16:19:24.978200 kernel: IPI shorthand broadcast: enabled Jun 25 16:19:24.978222 kernel: sched_clock: Marking stable (664286167, 270761249)->(1023731849, -88684433) Jun 25 16:19:24.978237 kernel: registered taskstats version 1 Jun 25 16:19:24.978373 kernel: Loading compiled-in X.509 certificates Jun 25 16:19:24.978388 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:19:24.978402 kernel: Key type .fscrypt registered Jun 25 16:19:24.978416 kernel: Key type fscrypt-provisioning registered Jun 25 16:19:24.978456 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:19:24.978471 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:19:24.978486 kernel: ima: No architecture policies found Jun 25 16:19:24.978504 kernel: clk: Disabling unused clocks Jun 25 16:19:24.978543 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:19:24.978558 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:19:24.978571 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:19:24.978584 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:19:24.978622 kernel: Run /init as init process Jun 25 16:19:24.978635 kernel: with arguments: Jun 25 16:19:24.978648 kernel: /init Jun 25 16:19:24.978660 kernel: with environment: Jun 25 16:19:24.978707 kernel: HOME=/ Jun 25 16:19:24.978955 kernel: TERM=linux Jun 25 16:19:24.978975 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:19:24.978996 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:19:24.979016 systemd[1]: Detected virtualization amazon. Jun 25 16:19:24.979034 systemd[1]: Detected architecture x86-64. Jun 25 16:19:24.979216 systemd[1]: Running in initrd. Jun 25 16:19:24.979244 systemd[1]: No hostname configured, using default hostname. Jun 25 16:19:24.979260 systemd[1]: Hostname set to . Jun 25 16:19:24.979277 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:19:24.979293 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:19:24.979309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:19:24.979322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:19:24.979337 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:19:24.979353 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:19:24.979371 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:19:24.979387 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:19:24.979404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:19:24.979421 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:19:24.979437 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:19:24.979454 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:19:24.979468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:19:24.979485 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:19:24.979753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:19:24.979771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:19:24.979813 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:19:24.979831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:19:24.979848 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:19:24.979865 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:19:24.979882 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:19:24.979897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:19:24.979916 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:19:24.979932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:19:24.979947 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:19:24.979964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:19:24.979990 systemd-journald[180]: Journal started Jun 25 16:19:24.980083 systemd-journald[180]: Runtime Journal (/run/log/journal/ec2cacf7f55889f6096240e4f7d51b3f) is 4.8M, max 38.6M, 33.8M free. Jun 25 16:19:24.992102 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:19:25.013150 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:19:25.019401 systemd-modules-load[181]: Inserted module 'overlay' Jun 25 16:19:25.148456 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:19:25.148491 kernel: Bridge firewalling registered Jun 25 16:19:25.148509 kernel: SCSI subsystem initialized Jun 25 16:19:25.148526 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:19:25.148544 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:19:25.148560 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:19:25.068383 systemd-modules-load[181]: Inserted module 'br_netfilter' Jun 25 16:19:25.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.131410 systemd-modules-load[181]: Inserted module 'dm_multipath' Jun 25 16:19:25.155330 kernel: audit: type=1130 audit(1719332365.149:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.151202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:19:25.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.158526 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:19:25.162929 kernel: audit: type=1130 audit(1719332365.157:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.163473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:19:25.169907 kernel: audit: type=1130 audit(1719332365.161:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:25.169944 kernel: audit: type=1130 audit(1719332365.164:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.176318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:19:25.182552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:19:25.183427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:19:25.195577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:19:25.198128 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:19:25.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.196000 audit: BPF prog-id=6 op=LOAD Jun 25 16:19:25.207073 kernel: audit: type=1130 audit(1719332365.195:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.207096 kernel: audit: type=1334 audit(1719332365.196:7): prog-id=6 op=LOAD Jun 25 16:19:25.210170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:19:25.212673 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:19:25.220814 kernel: audit: type=1130 audit(1719332365.209:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.220846 kernel: audit: type=1130 audit(1719332365.214:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.229508 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 16:19:25.250471 dracut-cmdline[205]: dracut-dracut-053 Jun 25 16:19:25.254396 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:19:25.269246 systemd-resolved[197]: Positive Trust Anchors: Jun 25 16:19:25.269690 systemd-resolved[197]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:19:25.269744 systemd-resolved[197]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:19:25.282820 systemd-resolved[197]: Defaulting to hostname 'linux'. Jun 25 16:19:25.286266 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:19:25.292903 kernel: audit: type=1130 audit(1719332365.287:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.288318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:19:25.348100 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:19:25.362083 kernel: iscsi: registered transport (tcp) Jun 25 16:19:25.389293 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:19:25.389370 kernel: QLogic iSCSI HBA Driver Jun 25 16:19:25.433146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:19:25.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.439375 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:19:25.568178 kernel: raid6: avx512x4 gen() 16006 MB/s Jun 25 16:19:25.586115 kernel: raid6: avx512x2 gen() 13661 MB/s Jun 25 16:19:25.603157 kernel: raid6: avx512x1 gen() 16419 MB/s Jun 25 16:19:25.620316 kernel: raid6: avx2x4 gen() 14693 MB/s Jun 25 16:19:25.637118 kernel: raid6: avx2x2 gen() 14115 MB/s Jun 25 16:19:25.654414 kernel: raid6: avx2x1 gen() 12125 MB/s Jun 25 16:19:25.654483 kernel: raid6: using algorithm avx512x1 gen() 16419 MB/s Jun 25 16:19:25.672293 kernel: raid6: .... 
xor() 16374 MB/s, rmw enabled Jun 25 16:19:25.672370 kernel: raid6: using avx512x2 recovery algorithm Jun 25 16:19:25.681084 kernel: xor: automatically using best checksumming function avx Jun 25 16:19:25.903088 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:19:25.914040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:19:25.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.915000 audit: BPF prog-id=7 op=LOAD Jun 25 16:19:25.915000 audit: BPF prog-id=8 op=LOAD Jun 25 16:19:25.924384 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:19:25.949728 systemd-udevd[382]: Using default interface naming scheme 'v252'. Jun 25 16:19:25.955547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:19:25.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:25.963715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:19:25.990342 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Jun 25 16:19:26.031920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:19:26.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:26.039348 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:19:26.100664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:19:26.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:26.173397 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 16:19:26.228881 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 16:19:26.229171 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 25 16:19:26.229325 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:19:26.229353 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:2d:3d:74:06:4d Jun 25 16:19:26.229491 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:19:26.229513 kernel: AES CTR mode by8 optimization enabled Jun 25 16:19:26.248833 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:19:26.291434 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 16:19:26.295318 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:19:26.312145 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 16:19:26.318162 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:19:26.318225 kernel: GPT:9289727 != 16777215 Jun 25 16:19:26.318243 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:19:26.318261 kernel: GPT:9289727 != 16777215 Jun 25 16:19:26.318276 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 25 16:19:26.318293 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:19:26.420085 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435) Jun 25 16:19:26.492147 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:19:26.511374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 16:19:26.527087 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (430) Jun 25 16:19:26.578957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 16:19:26.592697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 16:19:26.595745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 16:19:26.616353 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:19:26.626310 disk-uuid[600]: Primary Header is updated. Jun 25 16:19:26.626310 disk-uuid[600]: Secondary Entries is updated. Jun 25 16:19:26.626310 disk-uuid[600]: Secondary Header is updated. Jun 25 16:19:26.631171 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:19:26.643137 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:19:26.650083 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:19:27.649049 disk-uuid[601]: The operation has completed successfully. Jun 25 16:19:27.655917 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 16:19:27.859730 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:19:27.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:27.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:27.859910 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:19:27.872508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:19:27.878400 sh[943]: Success Jun 25 16:19:27.911199 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:19:28.066674 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:19:28.072023 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:19:28.080019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:19:28.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:28.107580 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:19:28.107732 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:19:28.107782 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:19:28.111539 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:19:28.111602 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:19:28.244098 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:19:28.288208 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:19:28.288509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:19:28.294328 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:19:28.320114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:19:28.346032 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:19:28.346121 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:19:28.346140 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:19:28.367693 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:19:28.377598 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:19:28.381086 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:19:28.390847 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:19:28.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.394311 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:19:28.443534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:19:28.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.443000 audit: BPF prog-id=9 op=LOAD Jun 25 16:19:28.449448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:19:28.477381 systemd-networkd[1133]: lo: Link UP Jun 25 16:19:28.477392 systemd-networkd[1133]: lo: Gained carrier Jun 25 16:19:28.479555 systemd-networkd[1133]: Enumeration completed Jun 25 16:19:28.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.479874 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:19:28.479879 systemd-networkd[1133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:19:28.480031 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:19:28.482032 systemd[1]: Reached target network.target - Network. Jun 25 16:19:28.493394 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... 
Jun 25 16:19:28.498823 systemd-networkd[1133]: eth0: Link UP Jun 25 16:19:28.498942 systemd-networkd[1133]: eth0: Gained carrier Jun 25 16:19:28.498958 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:19:28.505659 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:19:28.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.520223 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:19:28.543264 iscsid[1138]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:19:28.543264 iscsid[1138]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:19:28.543264 iscsid[1138]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:19:28.543264 iscsid[1138]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:19:28.543264 iscsid[1138]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:19:28.543264 iscsid[1138]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:19:28.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.545897 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:19:28.554299 systemd-networkd[1133]: eth0: DHCPv4 address 172.31.30.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:19:28.565630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:19:28.589966 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:19:28.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.590245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:19:28.594133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:19:28.595664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:19:28.605122 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:19:28.618885 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:19:28.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jun 25 16:19:28.778386 ignition[1085]: Ignition 2.15.0 Jun 25 16:19:28.778400 ignition[1085]: Stage: fetch-offline Jun 25 16:19:28.779212 ignition[1085]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:28.779227 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:28.784900 ignition[1085]: Ignition finished successfully Jun 25 16:19:28.786686 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:19:28.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.791303 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:19:28.812868 ignition[1157]: Ignition 2.15.0 Jun 25 16:19:28.812891 ignition[1157]: Stage: fetch Jun 25 16:19:28.813265 ignition[1157]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:28.813280 ignition[1157]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:28.813399 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:28.827264 ignition[1157]: PUT result: OK Jun 25 16:19:28.831649 ignition[1157]: parsed url from cmdline: "" Jun 25 16:19:28.831692 ignition[1157]: no config URL provided Jun 25 16:19:28.831703 ignition[1157]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:19:28.831720 ignition[1157]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:19:28.831760 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:28.832704 ignition[1157]: PUT result: OK Jun 25 16:19:28.832753 ignition[1157]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 16:19:28.837904 ignition[1157]: GET result: OK Jun 25 16:19:28.838053 ignition[1157]: parsing config with SHA512: 18c8311da7460fa60b27c84f74b64f7eac2e03ec29dfd88e1829cb9e855d8fa1763a37606d7dc2f4d3107c44ae73dcb204798308e5336a51fbd05232dacb16b9 Jun 25 16:19:28.860646 unknown[1157]: fetched base config from "system" Jun 25 16:19:28.860846 unknown[1157]: fetched base config from "system" Jun 25 16:19:28.861206 unknown[1157]: fetched user config from "aws" Jun 25 16:19:28.864961 ignition[1157]: fetch: fetch complete Jun 25 16:19:28.864971 ignition[1157]: fetch: fetch passed Jun 25 16:19:28.865051 ignition[1157]: Ignition finished successfully Jun 25 16:19:28.873475 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:19:28.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.886294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:19:28.904116 ignition[1163]: Ignition 2.15.0 Jun 25 16:19:28.904131 ignition[1163]: Stage: kargs Jun 25 16:19:28.904397 ignition[1163]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:28.904407 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:28.904493 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:28.907333 ignition[1163]: PUT result: OK Jun 25 16:19:28.912875 ignition[1163]: kargs: kargs passed Jun 25 16:19:28.914549 ignition[1163]: Ignition finished successfully Jun 25 16:19:28.916485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 25 16:19:28.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.927042 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:19:28.950440 ignition[1169]: Ignition 2.15.0 Jun 25 16:19:28.950457 ignition[1169]: Stage: disks Jun 25 16:19:28.950900 ignition[1169]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:28.950910 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:28.951588 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:28.958237 ignition[1169]: PUT result: OK Jun 25 16:19:28.969775 ignition[1169]: disks: disks passed Jun 25 16:19:28.969958 ignition[1169]: Ignition finished successfully Jun 25 16:19:28.972293 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:19:28.980730 kernel: kauditd_printk_skb: 21 callbacks suppressed Jun 25 16:19:28.980843 kernel: audit: type=1130 audit(1719332368.976:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:28.977476 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:19:28.982556 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:19:28.985190 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:19:28.989557 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:19:28.990750 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:19:28.999381 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:19:29.051360 systemd-fsck[1177]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:19:29.058341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:19:29.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:29.067092 kernel: audit: type=1130 audit(1719332369.061:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:29.068308 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:19:29.250343 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:19:29.251288 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:19:29.253165 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:19:29.287485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:19:29.290959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:19:29.294829 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 25 16:19:29.295003 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:19:29.295037 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:19:29.304250 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:19:29.309858 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:19:29.323097 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1194) Jun 25 16:19:29.327571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:19:29.327640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:19:29.327661 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:19:29.333091 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:19:29.335032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:19:29.726243 initrd-setup-root[1218]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:19:29.755753 initrd-setup-root[1225]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:19:29.769799 initrd-setup-root[1232]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:19:29.788596 initrd-setup-root[1239]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:19:29.820298 systemd-networkd[1133]: eth0: Gained IPv6LL Jun 25 16:19:30.110973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:19:30.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:30.116079 kernel: audit: type=1130 audit(1719332370.111:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:30.118265 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:19:30.120654 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:19:30.132307 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:19:30.133953 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:19:30.187791 ignition[1305]: INFO : Ignition 2.15.0 Jun 25 16:19:30.187791 ignition[1305]: INFO : Stage: mount Jun 25 16:19:30.191235 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:30.191235 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:30.191235 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:30.198305 ignition[1305]: INFO : PUT result: OK Jun 25 16:19:30.203851 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:19:30.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:30.208950 ignition[1305]: INFO : mount: mount passed Jun 25 16:19:30.212827 kernel: audit: type=1130 audit(1719332370.204:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:30.212867 ignition[1305]: INFO : Ignition finished successfully Jun 25 16:19:30.215523 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:19:30.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:30.221084 kernel: audit: type=1130 audit(1719332370.214:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:30.223292 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:19:30.266467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:19:30.305346 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1315) Jun 25 16:19:30.311502 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:19:30.311574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:19:30.311592 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 16:19:30.317090 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 16:19:30.320452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:19:30.348745 ignition[1333]: INFO : Ignition 2.15.0 Jun 25 16:19:30.348745 ignition[1333]: INFO : Stage: files Jun 25 16:19:30.351492 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:30.351492 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:30.351492 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:30.355658 ignition[1333]: INFO : PUT result: OK Jun 25 16:19:30.361261 ignition[1333]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:19:30.363763 ignition[1333]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:19:30.365605 ignition[1333]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:19:30.384089 ignition[1333]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:19:30.386623 ignition[1333]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:19:30.388226 ignition[1333]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:19:30.387077 unknown[1333]: wrote ssh authorized keys file for user: core Jun 25 16:19:30.398138 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:19:30.401387 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:19:30.401387 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:19:30.401387 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:19:30.461957 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 16:19:30.652010 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:19:30.652010 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:19:30.655941 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:19:30.995401 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 16:19:31.505982 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:19:31.505982 ignition[1333]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 16:19:31.517587 ignition[1333]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(e): op(f): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:19:31.520601 ignition[1333]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:19:31.549497 kernel: audit: type=1130 audit(1719332371.541:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.528560 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:19:31.551197 ignition[1333]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:19:31.551197 ignition[1333]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:19:31.551197 ignition[1333]: INFO : files: files passed Jun 25 16:19:31.551197 ignition[1333]: INFO : Ignition finished successfully Jun 25 16:19:31.548353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:19:31.568330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:19:31.576730 kernel: audit: type=1130 audit(1719332371.570:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.576757 kernel: audit: type=1131 audit(1719332371.570:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.570131 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:19:31.570227 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:19:31.583404 initrd-setup-root-after-ignition[1359]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:19:31.583404 initrd-setup-root-after-ignition[1359]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:19:31.587038 initrd-setup-root-after-ignition[1363]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:19:31.590475 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jun 25 16:19:31.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.590648 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:19:31.596901 kernel: audit: type=1130 audit(1719332371.589:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.607294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:19:31.635348 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:19:31.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.635466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:19:31.643177 kernel: audit: type=1130 audit(1719332371.635:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.637621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:19:31.643180 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:19:31.646144 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:19:31.655339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:19:31.674956 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:19:31.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.678298 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:19:31.698719 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:19:31.701499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:19:31.704351 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:19:31.706877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:19:31.708586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:19:31.709280 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:19:31.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:31.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.712457 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:19:31.713726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:19:31.713823 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:19:31.713905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:19:31.714440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:19:31.715718 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:19:31.715817 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:19:31.715908 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:19:31.715990 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:19:31.716189 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:19:31.716260 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:19:31.716374 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:19:31.764498 iscsid[1138]: iscsid shutting down. Jun 25 16:19:31.719360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:19:31.719430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:19:31.719606 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:19:31.719765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:19:31.719851 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:19:31.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.719950 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:19:31.720026 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:19:31.759391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:19:31.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.761605 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:19:31.769176 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:19:31.773244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jun 25 16:19:31.773484 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:19:31.777643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:19:31.777800 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:19:31.781092 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:19:31.781243 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:19:31.785687 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:19:31.789998 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:19:31.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.803507 ignition[1377]: INFO : Ignition 2.15.0 Jun 25 16:19:31.803507 ignition[1377]: INFO : Stage: umount Jun 25 16:19:31.803507 ignition[1377]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:19:31.803507 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 16:19:31.803507 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 16:19:31.790129 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:19:31.813476 ignition[1377]: INFO : PUT result: OK Jun 25 16:19:31.813476 ignition[1377]: INFO : umount: umount passed Jun 25 16:19:31.813476 ignition[1377]: INFO : Ignition finished successfully Jun 25 16:19:31.792777 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:19:31.792910 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:19:31.809577 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:19:31.809699 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:19:31.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.822349 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:19:31.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.822412 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:19:31.823745 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:19:31.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:31.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.823792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:19:31.824915 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:19:31.824970 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:19:31.829210 systemd[1]: Stopped target network.target - Network. Jun 25 16:19:31.830251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:19:31.830330 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:19:31.831791 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:19:31.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.840586 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:19:31.840672 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:19:31.842134 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:19:31.844312 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:19:31.845540 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:19:31.845601 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:19:31.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.847768 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:19:31.847822 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:19:31.850031 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:19:31.850096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:19:31.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.853578 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:19:31.879000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:19:31.854815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:19:31.859180 systemd-networkd[1133]: eth0: DHCPv6 lease lost Jun 25 16:19:31.883000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:19:31.872083 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:19:31.872596 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:19:31.872706 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:19:31.877581 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:19:31.878573 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:19:31.882301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:19:31.882338 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:19:31.900645 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 25 16:19:31.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.904729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:19:31.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.906383 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:19:31.907724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:19:31.907798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:19:31.910012 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:19:31.910204 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:19:31.910345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:19:31.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.910384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:19:31.914279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:19:31.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.920555 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:19:31.920652 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:19:31.928672 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:19:31.928827 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:19:31.930570 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:19:31.930660 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:19:31.940462 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:19:31.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.941744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:19:31.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:31.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.941852 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:19:31.941883 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:19:31.945022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:19:31.945089 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:19:31.947194 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:19:31.947232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:19:31.948456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:19:31.948493 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:19:31.973432 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:19:31.977301 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:19:31.978966 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:19:31.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.981521 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:19:31.981591 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:19:31.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.984198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:19:31.984263 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:19:31.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.989500 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:19:31.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.991230 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:19:31.991582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:19:31.992025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:19:31.992160 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jun 25 16:19:31.999009 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:19:32.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:32.001894 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:19:32.005657 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:19:32.024374 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:19:32.064123 systemd[1]: Switching root. Jun 25 16:19:32.071000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:19:32.071000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:19:32.073000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:19:32.073000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:19:32.073000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:19:32.097851 systemd-journald[180]: Journal stopped Jun 25 16:19:34.210030 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jun 25 16:19:34.220927 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:19:34.220966 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:19:34.220987 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:19:34.221012 kernel: SELinux: policy capability open_perms=1 Jun 25 16:19:34.221032 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:19:34.221084 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:19:34.221112 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:19:34.221131 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:19:34.221150 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:19:34.221170 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:19:34.221191 systemd[1]: Successfully loaded SELinux policy in 71.262ms. Jun 25 16:19:34.221220 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.428ms. Jun 25 16:19:34.221243 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:19:34.221265 systemd[1]: Detected virtualization amazon. Jun 25 16:19:34.221298 systemd[1]: Detected architecture x86-64. Jun 25 16:19:34.221319 systemd[1]: Detected first boot. Jun 25 16:19:34.221344 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:19:34.221366 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:19:34.221386 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:19:34.221408 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 16:19:34.221439 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:19:34.221464 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:19:34.221485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:19:34.221505 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:19:34.221527 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jun 25 16:19:34.221549 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:19:34.221571 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:19:34.221592 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:19:34.221613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:19:34.221634 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:19:34.221657 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:19:34.221679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:19:34.221701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:19:34.221722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:19:34.221743 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:19:34.221764 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:19:34.221785 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:19:34.221805 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:19:34.221826 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:19:34.221850 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:19:34.221871 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:19:34.221894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:19:34.221914 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:19:34.221990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:19:34.222012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:19:34.222033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:19:34.222066 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:19:34.222093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:19:34.222114 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:19:34.222134 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:19:34.222156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:34.222178 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:19:34.222198 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:19:34.222219 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:19:34.222240 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:19:34.222263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:19:34.222285 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:19:34.222305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:19:34.222326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 25 16:19:34.222347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:19:34.222368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:19:34.222388 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:19:34.222410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:19:34.222433 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:19:34.222457 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 16:19:34.222485 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 16:19:34.222506 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:19:34.222527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:19:34.222548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:19:34.222569 kernel: fuse: init (API version 7.37) Jun 25 16:19:34.222589 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:19:34.222610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:19:34.222633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:34.222656 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:19:34.222678 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:19:34.222700 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:19:34.222721 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:19:34.222741 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:19:34.222762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:19:34.222784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:19:34.222804 kernel: kauditd_printk_skb: 48 callbacks suppressed Jun 25 16:19:34.222827 kernel: audit: type=1130 audit(1719332374.085:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.222848 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:19:34.222870 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:19:34.222891 kernel: audit: type=1130 audit(1719332374.095:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.222911 kernel: audit: type=1131 audit(1719332374.098:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.222931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:19:34.222952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 16:19:34.222976 kernel: audit: type=1130 audit(1719332374.112:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.222995 kernel: loop: module loaded Jun 25 16:19:34.223014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:19:34.223033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:19:34.223066 kernel: audit: type=1131 audit(1719332374.116:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223086 kernel: audit: type=1130 audit(1719332374.132:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223106 kernel: audit: type=1131 audit(1719332374.132:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223129 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:19:34.223150 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:19:34.223170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:19:34.223191 kernel: audit: type=1130 audit(1719332374.144:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:19:34.223232 kernel: audit: type=1131 audit(1719332374.144:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223252 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:19:34.223273 kernel: audit: type=1130 audit(1719332374.160:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.223296 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:19:34.223320 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:19:34.223341 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:19:34.223362 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:19:34.223383 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:19:34.223404 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:19:34.223427 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 25 16:19:34.223449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:19:34.223477 systemd-journald[1514]: Journal started Jun 25 16:19:34.223555 systemd-journald[1514]: Runtime Journal (/run/log/journal/ec2cacf7f55889f6096240e4f7d51b3f) is 4.8M, max 38.6M, 33.8M free. Jun 25 16:19:33.821000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 16:19:34.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:34.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.195000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:19:34.195000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcab166670 a2=4000 a3=7ffcab16670c items=0 ppid=1 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:34.195000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:19:34.257900 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:19:34.257964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:19:34.257989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:19:34.258011 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:19:34.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.255173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:19:34.266508 kernel: ACPI: bus type drm_connector registered Jun 25 16:19:34.256724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:19:34.265290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:19:34.267289 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:19:34.267566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:19:34.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.276455 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:19:34.277856 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:19:34.300373 systemd-journald[1514]: Time spent on flushing to /var/log/journal/ec2cacf7f55889f6096240e4f7d51b3f is 75.028ms for 1079 entries. Jun 25 16:19:34.300373 systemd-journald[1514]: System Journal (/var/log/journal/ec2cacf7f55889f6096240e4f7d51b3f) is 8.0M, max 195.6M, 187.6M free. 
Jun 25 16:19:34.381258 systemd-journald[1514]: Received client request to flush runtime journal. Jun 25 16:19:34.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.326054 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:19:34.384166 udevadm[1560]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:19:34.332311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:19:34.341687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:19:34.360776 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:19:34.366350 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:19:34.382564 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:19:34.423772 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:19:34.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:34.430409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:19:34.463738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:19:34.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.089567 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:19:35.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.101361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:19:35.142198 systemd-udevd[1574]: Using default interface naming scheme 'v252'. Jun 25 16:19:35.197637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:19:35.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:19:35.207442 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:19:35.239306 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:19:35.258440 (udev-worker)[1579]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:19:35.356024 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:19:35.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.361151 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jun 25 16:19:35.371121 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1586) Jun 25 16:19:35.391516 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:19:35.399047 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 16:19:35.446089 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 25 16:19:35.485109 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:19:35.491080 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jun 25 16:19:35.505103 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 16:19:35.513543 systemd-networkd[1581]: lo: Link UP Jun 25 16:19:35.513558 systemd-networkd[1581]: lo: Gained carrier Jun 25 16:19:35.514263 systemd-networkd[1581]: Enumeration completed Jun 25 16:19:35.514411 systemd-networkd[1581]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:19:35.514416 systemd-networkd[1581]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:19:35.514427 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:19:35.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.519089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:19:35.519517 systemd-networkd[1581]: eth0: Link UP Jun 25 16:19:35.519688 systemd-networkd[1581]: eth0: Gained carrier Jun 25 16:19:35.519716 systemd-networkd[1581]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:19:35.520263 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:19:35.530218 systemd-networkd[1581]: eth0: DHCPv4 address 172.31.30.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 16:19:35.546090 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1584) Jun 25 16:19:35.562085 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:19:35.747033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 16:19:35.760715 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:19:35.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:35.766415 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:19:35.790042 lvm[1690]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:19:35.821505 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:19:35.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.823471 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:19:35.829281 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:19:35.836100 lvm[1693]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:19:35.864608 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:19:35.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:35.866308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:19:35.867619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:19:35.867648 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:19:35.869085 systemd[1]: Reached target machines.target - Containers. Jun 25 16:19:35.885675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:19:35.887303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:19:35.887372 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:35.888931 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:19:35.898269 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:19:35.901303 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:19:35.904557 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:19:35.906791 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1697 (bootctl) Jun 25 16:19:35.908291 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:19:35.916082 kernel: loop0: detected capacity change from 0 to 139360 Jun 25 16:19:35.928759 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:19:35.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:36.018086 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:19:36.038082 kernel: loop1: detected capacity change from 0 to 60984 Jun 25 16:19:36.098952 systemd-fsck[1705]: fsck.fat 4.2 (2021-01-31) Jun 25 16:19:36.098952 systemd-fsck[1705]: /dev/nvme0n1p1: 808 files, 120378/258078 clusters Jun 25 16:19:36.102545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:19:36.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:36.109181 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:19:36.131827 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:19:36.142079 kernel: loop2: detected capacity change from 0 to 209816 Jun 25 16:19:36.170486 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:19:36.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:36.304084 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:19:36.490095 kernel: loop4: detected capacity change from 0 to 139360 Jun 25 16:19:36.525043 kernel: loop5: detected capacity change from 0 to 60984 Jun 25 16:19:36.556087 kernel: loop6: detected capacity change from 0 to 209816 Jun 25 16:19:36.593089 kernel: loop7: detected capacity change from 0 to 80584 Jun 25 16:19:36.631295 (sd-sysext)[1728]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 16:19:36.631863 (sd-sysext)[1728]: Merged extensions into '/usr'. Jun 25 16:19:36.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:36.634080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:19:36.643339 systemd[1]: Starting ensure-sysext.service... Jun 25 16:19:36.648845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:19:36.676456 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:19:36.682740 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:19:36.683378 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:19:36.692927 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:19:36.696672 systemd[1]: Reloading. Jun 25 16:19:37.154279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:19:37.165488 ldconfig[1696]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:19:37.280902 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
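The (sd-sysext) lines above are the step that turns this image into a working container host: the containerd-flatcar, docker-flatcar, kubernetes and oem-ami system extension images are overlaid onto /usr, which is why the daemon reload and the ldconfig rebuild follow. Purely as an illustrative sketch using standard systemd-sysext verbs (not commands from this log), the merged state can be checked with:

    systemd-sysext status     # which hierarchies (/usr, /opt) currently have extensions merged
    systemd-sysext refresh    # unmerge and re-merge after adding or removing extension images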
Jun 25 16:19:37.296941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:19:37.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.299281 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:19:37.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.303966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:19:37.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.311104 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:19:37.322334 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:19:37.326378 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:19:37.332126 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:19:37.343648 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:19:37.353419 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:19:37.368355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.372016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:19:37.373294 systemd-networkd[1581]: eth0: Gained IPv6LL Jun 25 16:19:37.377330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:19:37.383618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:19:37.389854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:19:37.394706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:19:37.395053 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:37.395549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.414616 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:19:37.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.416900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:19:37.417367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 16:19:37.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.424137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:19:37.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.424360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:19:37.426275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.426712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:19:37.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.435729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:19:37.438388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:19:37.438584 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:37.438736 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:19:37.438850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.441521 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:19:37.441826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:19:37.457999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.458822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:19:37.468601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:19:37.477116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
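The repeated modprobe@dm_mod / modprobe@efi_pstore / modprobe@loop start-and-stop pairs above, and their SERVICE_START/SERVICE_STOP audit records, come from systemd's modprobe@.service template: each instance is a oneshot unit that runs modprobe for its instance name and then deactivates, so the stops are normal rather than failures. Illustrative only, using the standard template (not a command from this log):

    systemctl start modprobe@loop.service    # roughly equivalent to: modprobe loop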
Jun 25 16:19:37.482000 audit[1824]: SYSTEM_BOOT pid=1824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.490957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:19:37.492821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:19:37.493051 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:37.493322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:19:37.499208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:19:37.499466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:19:37.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.514021 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:19:37.514329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:19:37.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.516730 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:19:37.517207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:19:37.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.520422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:19:37.532309 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:19:37.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.540581 systemd[1]: Finished ensure-sysext.service. 
Jun 25 16:19:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.565984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:19:37.566256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:19:37.567952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:19:37.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.583128 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:19:37.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.590353 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:19:37.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.611975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:19:37.613719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:19:37.641860 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:19:37.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:37.654000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:19:37.654000 audit[1855]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5a95ce00 a2=420 a3=0 items=0 ppid=1813 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:37.654000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:19:37.656882 augenrules[1855]: No rules Jun 25 16:19:37.657787 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:19:37.697930 systemd-resolved[1819]: Positive Trust Anchors: Jun 25 16:19:37.697952 systemd-resolved[1819]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:19:37.697994 systemd-resolved[1819]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:19:37.715116 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:19:37.716865 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:19:37.719569 systemd-resolved[1819]: Defaulting to hostname 'linux'. Jun 25 16:19:37.721794 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:19:37.723471 systemd[1]: Reached target network.target - Network. Jun 25 16:19:37.725390 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:19:37.727862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:19:37.728992 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:19:37.730270 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:19:37.732702 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:19:37.734436 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:19:37.735896 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:19:37.737041 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:19:37.738382 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:19:37.738424 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:19:37.739426 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:19:37.741341 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:19:37.745007 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:19:37.747512 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:19:37.748939 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:37.749486 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:19:37.751178 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:19:37.752614 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:19:37.754286 systemd[1]: System is tainted: cgroupsv1 Jun 25 16:19:37.754354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:19:37.754383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:19:37.762299 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:19:37.767499 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
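The trust-anchor output above is systemd-resolved loading its built-in DNSSEC positive trust anchor, the root-zone KSK-2017 DS record, along with the usual negative trust anchors for private and special-use reverse zones; the transient hostname falls back to 'linux' here and is replaced by ip-172-31-30-52 later in the log. As an illustrative aside (standard resolved tooling, not commands taken from this log):

    resolvectl status              # per-link DNS servers and whether DNSSEC validation is enabled
    resolvectl query example.com   # one-off lookup routed through systemd-resolved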
Jun 25 16:19:37.773135 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:19:37.798212 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:19:37.813098 jq[1869]: false Jun 25 16:19:37.824220 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:19:37.825914 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:19:37.836314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:37.839938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:19:37.844111 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:19:37.847953 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:19:37.859376 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 16:19:37.863815 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:19:37.874334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:19:37.882723 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:19:37.886256 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:19:37.886342 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:19:37.893342 extend-filesystems[1870]: Found loop4 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found loop5 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found loop6 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found loop7 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p1 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p2 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p3 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found usr Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p4 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p6 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p7 Jun 25 16:19:37.897453 extend-filesystems[1870]: Found nvme0n1p9 Jun 25 16:19:37.897453 extend-filesystems[1870]: Checking size of /dev/nvme0n1p9 Jun 25 16:19:37.897276 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:19:37.901450 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:19:37.907958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:19:37.908373 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:19:37.968088 update_engine[1886]: I0625 16:19:37.956034 1886 main.cc:92] Flatcar Update Engine starting Jun 25 16:19:37.978497 systemd-timesyncd[1820]: Contacted time server 71.162.136.44:123 (0.flatcar.pool.ntp.org). Jun 25 16:19:37.984999 systemd-timesyncd[1820]: Initial clock synchronization to Tue 2024-06-25 16:19:38.231596 UTC. Jun 25 16:19:37.994431 jq[1888]: true Jun 25 16:19:38.008705 jq[1899]: true Jun 25 16:19:38.021046 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
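Just above, systemd-timesyncd reaches 0.flatcar.pool.ntp.org (71.162.136.44) and performs the initial clock synchronization for the instance. Illustrative commands using standard systemd tooling (not from this log):

    timedatectl timesync-status   # current server, stratum, poll interval and last offset
    timedatectl show-timesync     # effective NTP= / FallbackNTP= server list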
Jun 25 16:19:38.021594 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:19:38.067136 extend-filesystems[1870]: Resized partition /dev/nvme0n1p9 Jun 25 16:19:38.089799 tar[1891]: linux-amd64/helm Jun 25 16:19:38.085298 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 16:19:38.091324 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 16:19:38.127777 dbus-daemon[1868]: [system] SELinux support is enabled Jun 25 16:19:38.132171 extend-filesystems[1920]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:19:38.133757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:19:38.147791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:19:38.147829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:19:38.150330 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:19:38.150366 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:19:38.171120 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 16:19:38.194476 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:19:38.194941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:19:38.226119 systemd-logind[1885]: Watching system buttons on /dev/input/event2 (Power Button) Jun 25 16:19:38.226151 systemd-logind[1885]: Watching system buttons on /dev/input/event3 (Sleep Button) Jun 25 16:19:38.226290 systemd-logind[1885]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:19:38.235473 dbus-daemon[1868]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1581 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 16:19:38.235518 systemd-logind[1885]: New seat seat0. Jun 25 16:19:38.246929 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 16:19:38.273113 update_engine[1886]: I0625 16:19:38.272643 1886 update_check_scheduler.cc:74] Next update check in 3m48s Jun 25 16:19:38.273846 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:19:38.277038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:19:38.281432 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:19:38.296689 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:19:38.299893 amazon-ssm-agent[1923]: Initializing new seelog logger Jun 25 16:19:38.300390 amazon-ssm-agent[1923]: New Seelog Logger Creation Complete Jun 25 16:19:38.300390 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.300390 amazon-ssm-agent[1923]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jun 25 16:19:38.300817 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 processing appconfig overrides Jun 25 16:19:38.302538 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.302538 amazon-ssm-agent[1923]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.302730 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 processing appconfig overrides Jun 25 16:19:38.303165 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.303165 amazon-ssm-agent[1923]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.303393 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 processing appconfig overrides Jun 25 16:19:38.304000 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO Proxy environment variables: Jun 25 16:19:38.324823 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:19:38.354657 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.354657 amazon-ssm-agent[1923]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 16:19:38.354657 amazon-ssm-agent[1923]: 2024/06/25 16:19:38 processing appconfig overrides Jun 25 16:19:38.404452 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO https_proxy: Jun 25 16:19:38.444115 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 16:19:38.513547 extend-filesystems[1920]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 16:19:38.513547 extend-filesystems[1920]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:19:38.513547 extend-filesystems[1920]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 16:19:38.527002 extend-filesystems[1870]: Resized filesystem in /dev/nvme0n1p9 Jun 25 16:19:38.530730 bash[1948]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:19:38.517319 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:19:38.517662 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:19:38.526772 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:19:38.532839 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO http_proxy: Jun 25 16:19:38.537549 systemd[1]: Starting sshkeys.service... Jun 25 16:19:38.575462 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:19:38.584885 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 16:19:38.612624 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 16:19:38.613141 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 16:19:38.631584 dbus-daemon[1868]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1942 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 16:19:38.632988 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO no_proxy: Jun 25 16:19:38.640900 systemd[1]: Starting polkit.service - Authorization Manager... 
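The extend-filesystems output above is a routine online ext4 grow: the root partition nvme0n1p9 was enlarged and resize2fs expanded the mounted filesystem in place from 553472 to 1489915 4k blocks (roughly 2.1 GiB to 5.7 GiB). A minimal manual equivalent, shown only as an illustrative sketch (growpart from cloud-utils is an assumption here, not a tool named in this log):

    growpart /dev/nvme0n1 9     # grow partition 9 to the end of the disk
    resize2fs /dev/nvme0n1p9    # online-grow the mounted ext4 filesystem to fill the partition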
Jun 25 16:19:38.671113 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1971) Jun 25 16:19:38.687028 polkitd[1977]: Started polkitd version 121 Jun 25 16:19:38.716917 polkitd[1977]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 16:19:38.717789 polkitd[1977]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 16:19:38.725370 polkitd[1977]: Finished loading, compiling and executing 2 rules Jun 25 16:19:38.737762 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 16:19:38.737977 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 16:19:38.740723 polkitd[1977]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 16:19:38.750678 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO Checking if agent identity type OnPrem can be assumed Jun 25 16:19:38.847197 systemd-hostnamed[1942]: Hostname set to (transient) Jun 25 16:19:38.847917 systemd-resolved[1819]: System hostname changed to 'ip-172-31-30-52'. Jun 25 16:19:38.852006 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO Checking if agent identity type EC2 can be assumed Jun 25 16:19:38.955606 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO Agent will take identity from EC2 Jun 25 16:19:38.982196 locksmithd[1947]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:19:39.085219 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:19:39.092315 coreos-metadata[1867]: Jun 25 16:19:39.087 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:19:39.103752 coreos-metadata[1867]: Jun 25 16:19:39.103 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 16:19:39.106190 coreos-metadata[1867]: Jun 25 16:19:39.106 INFO Fetch successful Jun 25 16:19:39.106353 coreos-metadata[1867]: Jun 25 16:19:39.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 16:19:39.107112 coreos-metadata[1867]: Jun 25 16:19:39.107 INFO Fetch successful Jun 25 16:19:39.107264 coreos-metadata[1867]: Jun 25 16:19:39.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 16:19:39.109960 coreos-metadata[1867]: Jun 25 16:19:39.109 INFO Fetch successful Jun 25 16:19:39.110123 coreos-metadata[1867]: Jun 25 16:19:39.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 16:19:39.111877 coreos-metadata[1867]: Jun 25 16:19:39.111 INFO Fetch successful Jun 25 16:19:39.112114 coreos-metadata[1867]: Jun 25 16:19:39.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 16:19:39.115294 coreos-metadata[1867]: Jun 25 16:19:39.115 INFO Fetch failed with 404: resource not found Jun 25 16:19:39.115456 coreos-metadata[1867]: Jun 25 16:19:39.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 16:19:39.125553 coreos-metadata[1867]: Jun 25 16:19:39.125 INFO Fetch successful Jun 25 16:19:39.125779 coreos-metadata[1867]: Jun 25 16:19:39.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 16:19:39.127451 coreos-metadata[1867]: Jun 25 16:19:39.126 INFO Fetch successful Jun 25 16:19:39.127654 coreos-metadata[1867]: Jun 25 16:19:39.127 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 16:19:39.129313 
coreos-metadata[1867]: Jun 25 16:19:39.129 INFO Fetch successful Jun 25 16:19:39.129449 coreos-metadata[1867]: Jun 25 16:19:39.129 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 16:19:39.134339 coreos-metadata[1867]: Jun 25 16:19:39.134 INFO Fetch successful Jun 25 16:19:39.134489 coreos-metadata[1867]: Jun 25 16:19:39.134 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 16:19:39.137512 coreos-metadata[1867]: Jun 25 16:19:39.137 INFO Fetch successful Jun 25 16:19:39.177867 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:19:39.181450 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:19:39.189204 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:19:39.288724 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 16:19:39.397012 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 16:19:39.493766 containerd[1894]: time="2024-06-25T16:19:39.493650700Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:19:39.504619 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 25 16:19:39.574538 coreos-metadata[1965]: Jun 25 16:19:39.574 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 16:19:39.579524 coreos-metadata[1965]: Jun 25 16:19:39.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 16:19:39.580498 coreos-metadata[1965]: Jun 25 16:19:39.580 INFO Fetch successful Jun 25 16:19:39.580498 coreos-metadata[1965]: Jun 25 16:19:39.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 16:19:39.581147 coreos-metadata[1965]: Jun 25 16:19:39.581 INFO Fetch successful Jun 25 16:19:39.584472 unknown[1965]: wrote ssh authorized keys file for user: core Jun 25 16:19:39.651468 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 16:19:39.674368 update-ssh-keys[2059]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:19:39.675959 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:19:39.681334 systemd[1]: Finished sshkeys.service. Jun 25 16:19:39.751607 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jun 25 16:19:39.873338 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [Registrar] Starting registrar module Jun 25 16:19:39.886868 containerd[1894]: time="2024-06-25T16:19:39.886818335Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:19:39.891336 containerd[1894]: time="2024-06-25T16:19:39.891248757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.910283 containerd[1894]: time="2024-06-25T16:19:39.909955576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:19:39.910283 containerd[1894]: time="2024-06-25T16:19:39.910262070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911066613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911106441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911263810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911348600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911385348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911543 containerd[1894]: time="2024-06-25T16:19:39.911517534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911922 containerd[1894]: time="2024-06-25T16:19:39.911895880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.911981 containerd[1894]: time="2024-06-25T16:19:39.911929239Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:19:39.911981 containerd[1894]: time="2024-06-25T16:19:39.911945644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:19:39.912406 containerd[1894]: time="2024-06-25T16:19:39.912377864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:19:39.912476 containerd[1894]: time="2024-06-25T16:19:39.912407627Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:19:39.913333 containerd[1894]: time="2024-06-25T16:19:39.912499815Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:19:39.913333 containerd[1894]: time="2024-06-25T16:19:39.912538888Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:19:39.929287 containerd[1894]: time="2024-06-25T16:19:39.929236957Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:19:39.929287 containerd[1894]: time="2024-06-25T16:19:39.929292991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jun 25 16:19:39.929634 containerd[1894]: time="2024-06-25T16:19:39.929313064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:19:39.929634 containerd[1894]: time="2024-06-25T16:19:39.929380571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:19:39.929634 containerd[1894]: time="2024-06-25T16:19:39.929402368Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:19:39.929634 containerd[1894]: time="2024-06-25T16:19:39.929417134Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:19:39.930094 containerd[1894]: time="2024-06-25T16:19:39.929640720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:19:39.930153 containerd[1894]: time="2024-06-25T16:19:39.930105942Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:19:39.930153 containerd[1894]: time="2024-06-25T16:19:39.930135724Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:19:39.930252 containerd[1894]: time="2024-06-25T16:19:39.930158606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:19:39.930252 containerd[1894]: time="2024-06-25T16:19:39.930180831Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:19:39.930252 containerd[1894]: time="2024-06-25T16:19:39.930203896Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930252 containerd[1894]: time="2024-06-25T16:19:39.930229840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930250882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930272839Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930301413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930322405Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930342515Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:19:39.930399 containerd[1894]: time="2024-06-25T16:19:39.930361147Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:19:39.931021 containerd[1894]: time="2024-06-25T16:19:39.930908430Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:19:39.931682 containerd[1894]: time="2024-06-25T16:19:39.931546085Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 25 16:19:39.931757 containerd[1894]: time="2024-06-25T16:19:39.931705642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.931757 containerd[1894]: time="2024-06-25T16:19:39.931733673Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:19:39.931857 containerd[1894]: time="2024-06-25T16:19:39.931768232Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932114496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932158776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932189087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932215245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932242249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932372460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932399120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932446498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932666656Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932842529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932876266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932904264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932931313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.932960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.933385 containerd[1894]: time="2024-06-25T16:19:39.933077828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.934040 containerd[1894]: time="2024-06-25T16:19:39.933408003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:19:39.934040 containerd[1894]: time="2024-06-25T16:19:39.933434216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:19:39.935831 containerd[1894]: time="2024-06-25T16:19:39.934284827Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:19:39.936137 containerd[1894]: time="2024-06-25T16:19:39.935847248Z" level=info msg="Connect containerd service" Jun 25 16:19:39.936137 containerd[1894]: time="2024-06-25T16:19:39.935912297Z" level=info msg="using legacy CRI server" Jun 25 16:19:39.936137 containerd[1894]: time="2024-06-25T16:19:39.935925135Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:19:39.936137 containerd[1894]: time="2024-06-25T16:19:39.935990043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:19:39.944852 containerd[1894]: time="2024-06-25T16:19:39.944267072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:19:39.944852 containerd[1894]: 
time="2024-06-25T16:19:39.944360634Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:19:39.944852 containerd[1894]: time="2024-06-25T16:19:39.944389751Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:19:39.944852 containerd[1894]: time="2024-06-25T16:19:39.944407229Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:19:39.944852 containerd[1894]: time="2024-06-25T16:19:39.944424253Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:19:39.944852 containerd[1894]: time="2024-06-25T16:19:39.944853657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.944967367Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945031124Z" level=info msg="Start subscribing containerd event" Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945079320Z" level=info msg="Start recovering state" Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945239236Z" level=info msg="Start event monitor" Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945268905Z" level=info msg="Start snapshots syncer" Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945283353Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:19:39.945337 containerd[1894]: time="2024-06-25T16:19:39.945295348Z" level=info msg="Start streaming server" Jun 25 16:19:39.945505 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:19:39.946032 containerd[1894]: time="2024-06-25T16:19:39.945637247Z" level=info msg="containerd successfully booted in 0.459751s" Jun 25 16:19:39.973912 amazon-ssm-agent[1923]: 2024-06-25 16:19:38 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 16:19:40.854902 amazon-ssm-agent[1923]: 2024-06-25 16:19:40 INFO [EC2Identity] EC2 registration was successful. Jun 25 16:19:40.886089 amazon-ssm-agent[1923]: 2024-06-25 16:19:40 INFO [CredentialRefresher] credentialRefresher has started Jun 25 16:19:40.886089 amazon-ssm-agent[1923]: 2024-06-25 16:19:40 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 16:19:40.886089 amazon-ssm-agent[1923]: 2024-06-25 16:19:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 16:19:40.921712 tar[1891]: linux-amd64/LICENSE Jun 25 16:19:40.921712 tar[1891]: linux-amd64/README.md Jun 25 16:19:40.937753 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:19:40.955149 amazon-ssm-agent[1923]: 2024-06-25 16:19:40 INFO [CredentialRefresher] Next credential rotation will be in 30.4916617395 minutes Jun 25 16:19:40.965180 sshd_keygen[1915]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:19:41.004788 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:19:41.012871 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:19:41.025005 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:19:41.025385 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jun 25 16:19:41.036540 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:19:41.054649 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:19:41.064605 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:19:41.068796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:19:41.070696 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:19:41.116608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:41.118316 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:19:41.121657 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:19:41.132944 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:19:41.133220 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:19:41.136854 systemd[1]: Startup finished in 8.698s (kernel) + 8.579s (userspace) = 17.277s. Jun 25 16:19:41.912354 amazon-ssm-agent[1923]: 2024-06-25 16:19:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 16:19:42.013471 amazon-ssm-agent[1923]: 2024-06-25 16:19:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2135) started Jun 25 16:19:42.097199 kubelet[2125]: E0625 16:19:42.097094 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:19:42.100572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:19:42.100893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:19:42.113776 amazon-ssm-agent[1923]: 2024-06-25 16:19:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 16:19:46.639017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:19:46.649703 systemd[1]: Started sshd@0-172.31.30.52:22-139.178.89.65:36726.service - OpenSSH per-connection server daemon (139.178.89.65:36726). Jun 25 16:19:46.863944 sshd[2147]: Accepted publickey for core from 139.178.89.65 port 36726 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:46.869423 sshd[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:46.895586 systemd-logind[1885]: New session 1 of user core. Jun 25 16:19:46.896809 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:19:46.908055 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:19:46.941250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:19:46.949666 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:19:46.953756 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:47.077367 systemd[2152]: Queued start job for default target default.target. Jun 25 16:19:47.077734 systemd[2152]: Reached target paths.target - Paths. Jun 25 16:19:47.077757 systemd[2152]: Reached target sockets.target - Sockets. 
Jun 25 16:19:47.077774 systemd[2152]: Reached target timers.target - Timers. Jun 25 16:19:47.077790 systemd[2152]: Reached target basic.target - Basic System. Jun 25 16:19:47.077846 systemd[2152]: Reached target default.target - Main User Target. Jun 25 16:19:47.077884 systemd[2152]: Startup finished in 114ms. Jun 25 16:19:47.078712 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:19:47.085391 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:19:47.232569 systemd[1]: Started sshd@1-172.31.30.52:22-139.178.89.65:36732.service - OpenSSH per-connection server daemon (139.178.89.65:36732). Jun 25 16:19:47.401087 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 36732 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:47.402771 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:47.409053 systemd-logind[1885]: New session 2 of user core. Jun 25 16:19:47.411411 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:19:47.542422 sshd[2161]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:47.546183 systemd[1]: sshd@1-172.31.30.52:22-139.178.89.65:36732.service: Deactivated successfully. Jun 25 16:19:47.547425 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:19:47.549701 systemd-logind[1885]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:19:47.551548 systemd-logind[1885]: Removed session 2. Jun 25 16:19:47.568549 systemd[1]: Started sshd@2-172.31.30.52:22-139.178.89.65:36746.service - OpenSSH per-connection server daemon (139.178.89.65:36746). Jun 25 16:19:47.727223 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 36746 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:47.729112 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:47.735131 systemd-logind[1885]: New session 3 of user core. Jun 25 16:19:47.742552 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:19:47.861800 sshd[2168]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:47.867127 systemd-logind[1885]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:19:47.867387 systemd[1]: sshd@2-172.31.30.52:22-139.178.89.65:36746.service: Deactivated successfully. Jun 25 16:19:47.869720 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:19:47.874474 systemd-logind[1885]: Removed session 3. Jun 25 16:19:47.891788 systemd[1]: Started sshd@3-172.31.30.52:22-139.178.89.65:36752.service - OpenSSH per-connection server daemon (139.178.89.65:36752). Jun 25 16:19:48.052295 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 36752 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:48.054301 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:48.060121 systemd-logind[1885]: New session 4 of user core. Jun 25 16:19:48.067472 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:19:48.197386 sshd[2175]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:48.202231 systemd[1]: sshd@3-172.31.30.52:22-139.178.89.65:36752.service: Deactivated successfully. Jun 25 16:19:48.203561 systemd-logind[1885]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:19:48.203656 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:19:48.205853 systemd-logind[1885]: Removed session 4. 
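The kubelet exit logged above at 16:19:42 ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is likewise expected on a node that has not been bootstrapped yet; systemd keeps restarting the unit (the restart counter climbs further down in the log) until that file exists, which normally happens during cluster bootstrap rather than by hand. Purely as a sketch of the file the unit is waiting for, with illustrative field values that are assumptions rather than anything read from this host:

    import pathlib, textwrap

    # Illustrative minimal KubeletConfiguration; on a real node this is
    # generated during bootstrap (e.g. by kubeadm), not written by hand.
    # cgroupDriver is cgroupfs here only because the containerd CRI config
    # dumped earlier shows SystemdCgroup:false for the runc runtime.
    minimal_config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: cgroupfs
        staticPodPath: /etc/kubernetes/manifests
    """)

    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(minimal_config)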
Jun 25 16:19:48.228628 systemd[1]: Started sshd@4-172.31.30.52:22-139.178.89.65:36768.service - OpenSSH per-connection server daemon (139.178.89.65:36768). Jun 25 16:19:48.388646 sshd[2182]: Accepted publickey for core from 139.178.89.65 port 36768 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:48.391602 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:48.398620 systemd-logind[1885]: New session 5 of user core. Jun 25 16:19:48.400356 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:19:48.534863 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:19:48.535737 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:19:48.550613 sudo[2186]: pam_unix(sudo:session): session closed for user root Jun 25 16:19:48.574343 sshd[2182]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:48.578521 systemd[1]: sshd@4-172.31.30.52:22-139.178.89.65:36768.service: Deactivated successfully. Jun 25 16:19:48.580360 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:19:48.580973 systemd-logind[1885]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:19:48.582110 systemd-logind[1885]: Removed session 5. Jun 25 16:19:48.604716 systemd[1]: Started sshd@5-172.31.30.52:22-139.178.89.65:36772.service - OpenSSH per-connection server daemon (139.178.89.65:36772). Jun 25 16:19:48.762454 sshd[2190]: Accepted publickey for core from 139.178.89.65 port 36772 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:48.764374 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:48.774535 systemd-logind[1885]: New session 6 of user core. Jun 25 16:19:48.780513 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:19:48.892195 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:19:48.892553 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:19:48.896691 sudo[2195]: pam_unix(sudo:session): session closed for user root Jun 25 16:19:48.903055 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:19:48.903504 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:19:48.922553 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
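The two sudo commands above first delete rule fragments from /etc/audit/rules.d and then restart audit-rules.service; the restart regenerates the active ruleset from whichever fragments remain, which is why both auditctl and augenrules report "No rules" in the records that follow. A rough sketch of what that regeneration amounts to (an approximation of the behaviour, not the actual augenrules implementation):

    import pathlib

    # Approximation only: concatenate the remaining rule fragments in sorted
    # order; the combined text is what auditctl would then load.
    fragments = sorted(pathlib.Path("/etc/audit/rules.d").glob("*.rules"))
    ruleset = "\n".join(f.read_text() for f in fragments)
    print(ruleset if ruleset.strip() else "No rules")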
Jun 25 16:19:48.923000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:19:48.925538 auditctl[2198]: No rules Jun 25 16:19:48.930190 kernel: kauditd_printk_skb: 55 callbacks suppressed Jun 25 16:19:48.930273 kernel: audit: type=1305 audit(1719332388.923:151): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:19:48.930303 kernel: audit: type=1300 audit(1719332388.923:151): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc8346bc0 a2=420 a3=0 items=0 ppid=1 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:48.923000 audit[2198]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc8346bc0 a2=420 a3=0 items=0 ppid=1 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:48.930940 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:19:48.931310 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:19:48.923000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:19:48.933996 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:19:48.935162 kernel: audit: type=1327 audit(1719332388.923:151): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:19:48.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:48.945151 kernel: audit: type=1131 audit(1719332388.930:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:48.990751 augenrules[2216]: No rules Jun 25 16:19:48.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:48.993284 sudo[2194]: pam_unix(sudo:session): session closed for user root Jun 25 16:19:48.991000 audit[2194]: USER_END pid=2194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:48.991682 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:19:48.997278 kernel: audit: type=1130 audit(1719332388.990:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:48.997385 kernel: audit: type=1106 audit(1719332388.991:154): pid=2194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:48.991000 audit[2194]: CRED_DISP pid=2194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.000198 kernel: audit: type=1104 audit(1719332388.991:155): pid=2194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.019294 sshd[2190]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:49.020000 audit[2190]: USER_END pid=2190 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.025095 kernel: audit: type=1106 audit(1719332389.020:156): pid=2190 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.025150 systemd[1]: sshd@5-172.31.30.52:22-139.178.89.65:36772.service: Deactivated successfully. Jun 25 16:19:49.026319 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:19:49.020000 audit[2190]: CRED_DISP pid=2190 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.028058 systemd-logind[1885]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:19:49.031099 kernel: audit: type=1104 audit(1719332389.020:157): pid=2190 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.52:22-139.178.89.65:36772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.033830 systemd-logind[1885]: Removed session 6. Jun 25 16:19:49.034187 kernel: audit: type=1131 audit(1719332389.024:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.52:22-139.178.89.65:36772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.056586 systemd[1]: Started sshd@6-172.31.30.52:22-139.178.89.65:36782.service - OpenSSH per-connection server daemon (139.178.89.65:36782). Jun 25 16:19:49.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.52:22-139.178.89.65:36782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:49.211000 audit[2223]: USER_ACCT pid=2223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.214542 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 36782 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:19:49.213000 audit[2223]: CRED_ACQ pid=2223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.213000 audit[2223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5a73a080 a2=3 a3=7f870d836480 items=0 ppid=1 pid=2223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:49.213000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:19:49.215243 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:49.220746 systemd-logind[1885]: New session 7 of user core. Jun 25 16:19:49.227451 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:19:49.233000 audit[2223]: USER_START pid=2223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.235000 audit[2226]: CRED_ACQ pid=2226 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:19:49.326000 audit[2227]: USER_ACCT pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.327921 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:19:49.327000 audit[2227]: CRED_REFR pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.328372 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:19:49.329000 audit[2227]: USER_START pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:49.549644 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:19:50.146293 dockerd[2237]: time="2024-06-25T16:19:50.146229840Z" level=info msg="Starting up" Jun 25 16:19:50.989945 dockerd[2237]: time="2024-06-25T16:19:50.989869094Z" level=info msg="Loading containers: start." 
Jun 25 16:19:51.098000 audit[2269]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.098000 audit[2269]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdbfe9e6b0 a2=0 a3=7f9f15b24e90 items=0 ppid=2237 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.098000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:19:51.101000 audit[2271]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.101000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdc8df4a70 a2=0 a3=7fa1d6cc4e90 items=0 ppid=2237 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.101000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:19:51.103000 audit[2273]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.103000 audit[2273]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbae72630 a2=0 a3=7fcfa1fd5e90 items=0 ppid=2237 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.103000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:19:51.119000 audit[2275]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.119000 audit[2275]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb93df830 a2=0 a3=7f643de80e90 items=0 ppid=2237 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.119000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:19:51.124000 audit[2277]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.124000 audit[2277]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb18bd660 a2=0 a3=7fde67622e90 items=0 ppid=2237 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.124000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:19:51.128000 audit[2279]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:19:51.128000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc66280780 a2=0 a3=7f2e8f5e7e90 items=0 ppid=2237 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.128000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:19:51.162000 audit[2281]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.162000 audit[2281]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5def1060 a2=0 a3=7f453f1a0e90 items=0 ppid=2237 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.162000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:19:51.165000 audit[2283]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.165000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcba47a340 a2=0 a3=7ffb45f90e90 items=0 ppid=2237 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:19:51.167000 audit[2285]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.167000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcdb35d490 a2=0 a3=7f33ca59ae90 items=0 ppid=2237 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.167000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:19:51.186000 audit[2289]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.186000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffa6ee7190 a2=0 a3=7f8b2ae50e90 items=0 ppid=2237 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.186000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:19:51.187000 audit[2290]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.187000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc255f4dc0 a2=0 a3=7f0bc4dace90 items=0 ppid=2237 
pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.187000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:19:51.199298 kernel: Initializing XFRM netlink socket Jun 25 16:19:51.232361 (udev-worker)[2249]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:19:51.283000 audit[2298]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.283000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc41ab5690 a2=0 a3=7ff79e496e90 items=0 ppid=2237 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.283000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:19:51.366000 audit[2301]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.366000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff38b9f6f0 a2=0 a3=7fa4504ebe90 items=0 ppid=2237 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:19:51.373000 audit[2305]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.373000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcee80d4c0 a2=0 a3=7ffb6f1b1e90 items=0 ppid=2237 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.373000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:19:51.375000 audit[2307]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.375000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffe9b907b0 a2=0 a3=7fe5d52dae90 items=0 ppid=2237 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:19:51.378000 audit[2309]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:19:51.378000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd0c2571e0 a2=0 a3=7f253b2d1e90 items=0 ppid=2237 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.378000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:19:51.381000 audit[2311]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.381000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffff5099c70 a2=0 a3=7f8166844e90 items=0 ppid=2237 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.381000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:19:51.384000 audit[2313]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.384000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd333a1120 a2=0 a3=7fc2a3e25e90 items=0 ppid=2237 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.384000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:19:51.394000 audit[2316]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.394000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff2af3c550 a2=0 a3=7ff5cb131e90 items=0 ppid=2237 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.394000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:19:51.398000 audit[2318]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.398000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe5aa111b0 a2=0 a3=7f324ecf6e90 items=0 ppid=2237 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:19:51.401000 audit[2320]: NETFILTER_CFG 
table=filter:22 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.401000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe49169760 a2=0 a3=7ff1b86e5e90 items=0 ppid=2237 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:19:51.404000 audit[2322]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.404000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8b8d2320 a2=0 a3=7f9316c6ae90 items=0 ppid=2237 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:19:51.406448 systemd-networkd[1581]: docker0: Link UP Jun 25 16:19:51.422000 audit[2326]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.422000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc42005ac0 a2=0 a3=7f886aa88e90 items=0 ppid=2237 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:19:51.424000 audit[2327]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:51.424000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce9253f60 a2=0 a3=7f1527f44e90 items=0 ppid=2237 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:19:51.424748 dockerd[2237]: time="2024-06-25T16:19:51.424708422Z" level=info msg="Loading containers: done." 
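The NETFILTER_CFG/SYSCALL records above carry the dockerd-issued command lines only as hex in the PROCTITLE field, with argv elements separated by NUL bytes. A small decoder makes them readable; the sample value is copied verbatim from the first iptables record in this batch:

    def decode_proctitle(hex_title: str) -> str:
        """Decode an audit PROCTITLE value into a readable command line."""
        raw = bytes.fromhex(hex_title)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974"
        "002D74006E6174002D4E00444F434B4552"
    ))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER

Decoded this way, the batch shows dockerd creating its standard chains (DOCKER, DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2, DOCKER-USER) and wiring them into the FORWARD chain and the nat table as docker0 is brought up.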
Jun 25 16:19:51.610045 dockerd[2237]: time="2024-06-25T16:19:51.609990699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:19:51.610342 dockerd[2237]: time="2024-06-25T16:19:51.610310257Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:19:51.611194 dockerd[2237]: time="2024-06-25T16:19:51.610447154Z" level=info msg="Daemon has completed initialization" Jun 25 16:19:51.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:51.665862 dockerd[2237]: time="2024-06-25T16:19:51.664822690Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:19:51.664976 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:19:52.212949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:19:52.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:52.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:52.214446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:52.228229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:52.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:52.718620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:52.899678 kubelet[2376]: E0625 16:19:52.899617 2376 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:19:52.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:19:52.906715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:19:52.906883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:19:52.911532 containerd[1894]: time="2024-06-25T16:19:52.910957626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:19:53.549976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051718770.mount: Deactivated successfully. 
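The "Not using native diff for overlay2" warning at the top of this block is dockerd noting that the running kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, so it falls back to its slower naive diff path when committing image layers. One way to confirm the kernel option, assuming the kernel exposes its build config at /proc/config.gz (i.e. CONFIG_IKCONFIG_PROC is set; otherwise a /boot/config-* file is the usual place to look):

    import gzip

    # Assumes CONFIG_IKCONFIG_PROC is enabled so the running kernel's build
    # config is readable at /proc/config.gz.
    with gzip.open("/proc/config.gz", "rt") as cfg:
        for line in cfg:
            if "CONFIG_OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.strip())   # e.g. CONFIG_OVERLAY_FS_REDIRECT_DIR=y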
Jun 25 16:19:56.213213 containerd[1894]: time="2024-06-25T16:19:56.213113944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:56.215379 containerd[1894]: time="2024-06-25T16:19:56.215329662Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:19:56.217434 containerd[1894]: time="2024-06-25T16:19:56.217395669Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:56.221109 containerd[1894]: time="2024-06-25T16:19:56.221075161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:56.224439 containerd[1894]: time="2024-06-25T16:19:56.224404654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:56.225933 containerd[1894]: time="2024-06-25T16:19:56.225888353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.314868614s" Jun 25 16:19:56.226026 containerd[1894]: time="2024-06-25T16:19:56.225944535Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:19:56.254737 containerd[1894]: time="2024-06-25T16:19:56.254691396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:19:58.840785 containerd[1894]: time="2024-06-25T16:19:58.840728981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:58.842671 containerd[1894]: time="2024-06-25T16:19:58.842613640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:19:58.844650 containerd[1894]: time="2024-06-25T16:19:58.844614215Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:58.848464 containerd[1894]: time="2024-06-25T16:19:58.848428375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:58.851602 containerd[1894]: time="2024-06-25T16:19:58.851051540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:58.853081 containerd[1894]: time="2024-06-25T16:19:58.853016601Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.598280224s" Jun 25 16:19:58.853184 containerd[1894]: time="2024-06-25T16:19:58.853089661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:19:58.886090 containerd[1894]: time="2024-06-25T16:19:58.886037338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:20:00.728890 containerd[1894]: time="2024-06-25T16:20:00.728787057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:00.731421 containerd[1894]: time="2024-06-25T16:20:00.731359918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:20:00.733477 containerd[1894]: time="2024-06-25T16:20:00.733437453Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:00.736504 containerd[1894]: time="2024-06-25T16:20:00.736464392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:00.739646 containerd[1894]: time="2024-06-25T16:20:00.739601749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:00.741418 containerd[1894]: time="2024-06-25T16:20:00.741339175Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.855031553s" Jun 25 16:20:00.742149 containerd[1894]: time="2024-06-25T16:20:00.742114967Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:20:00.769736 containerd[1894]: time="2024-06-25T16:20:00.769705379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:20:03.174092 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:20:03.174246 kernel: audit: type=1130 audit(1719332403.158:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:03.174285 kernel: audit: type=1131 audit(1719332403.158:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:03.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:03.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:03.158765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:20:03.159041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:03.180632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:03.437189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809725655.mount: Deactivated successfully. Jun 25 16:20:04.088929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:04.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:04.092401 kernel: audit: type=1130 audit(1719332404.088:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:04.241602 kubelet[2479]: E0625 16:20:04.241541 2479 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:20:04.245613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:20:04.245858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:20:04.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:20:04.249093 kernel: audit: type=1131 audit(1719332404.245:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:20:04.653643 containerd[1894]: time="2024-06-25T16:20:04.653584827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:04.655176 containerd[1894]: time="2024-06-25T16:20:04.655084997Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:20:04.657274 containerd[1894]: time="2024-06-25T16:20:04.657232598Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:04.660418 containerd[1894]: time="2024-06-25T16:20:04.660377646Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:04.663557 containerd[1894]: time="2024-06-25T16:20:04.663516542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:04.665184 containerd[1894]: time="2024-06-25T16:20:04.665136737Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 3.895238087s" Jun 25 16:20:04.665506 containerd[1894]: time="2024-06-25T16:20:04.665191652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:20:04.700964 containerd[1894]: time="2024-06-25T16:20:04.700930628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:20:05.287847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852199602.mount: Deactivated successfully. 
Jun 25 16:20:05.303643 containerd[1894]: time="2024-06-25T16:20:05.303586854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:05.305493 containerd[1894]: time="2024-06-25T16:20:05.305374716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:20:05.307881 containerd[1894]: time="2024-06-25T16:20:05.307836834Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:05.311940 containerd[1894]: time="2024-06-25T16:20:05.311685769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:05.315465 containerd[1894]: time="2024-06-25T16:20:05.315420982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:05.316548 containerd[1894]: time="2024-06-25T16:20:05.316506483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 615.103868ms" Jun 25 16:20:05.316898 containerd[1894]: time="2024-06-25T16:20:05.316866229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:20:05.348754 containerd[1894]: time="2024-06-25T16:20:05.348708265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:20:05.966667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903792479.mount: Deactivated successfully. Jun 25 16:20:08.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:08.881892 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 16:20:08.885267 kernel: audit: type=1131 audit(1719332408.881:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:09.185553 containerd[1894]: time="2024-06-25T16:20:09.185328154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:09.187890 containerd[1894]: time="2024-06-25T16:20:09.187590275Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:20:09.189202 containerd[1894]: time="2024-06-25T16:20:09.189163858Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:09.192195 containerd[1894]: time="2024-06-25T16:20:09.192155778Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:09.195438 containerd[1894]: time="2024-06-25T16:20:09.195293217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:09.197083 containerd[1894]: time="2024-06-25T16:20:09.197008062Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.848254409s" Jun 25 16:20:09.198414 containerd[1894]: time="2024-06-25T16:20:09.197084337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:20:09.227513 containerd[1894]: time="2024-06-25T16:20:09.227462971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:20:09.842561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666406176.mount: Deactivated successfully. 
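The mount units in the "Deactivated successfully" entries are systemd-escaped paths: "/" is encoded as "-" and a literal "-" as "\x2d". A short sketch that reverses that escaping for the unit named above, just to make the underlying path readable (same transformation systemd-escape performs, reimplemented here for illustration):

```python
import re

def unescape_unit(name: str) -> str:
    """Reverse systemd unit-name escaping: '-' encodes '/', '\\xNN' encodes a raw byte."""
    path = name.replace("-", "/")        # the '\xNN' escape sequences contain no '-' characters
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount2666406176"))
# -> /var/lib/containerd/tmpmounts/containerd-mount2666406176
```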
Jun 25 16:20:10.766323 containerd[1894]: time="2024-06-25T16:20:10.766269677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:10.768000 containerd[1894]: time="2024-06-25T16:20:10.767940481Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:20:10.770251 containerd[1894]: time="2024-06-25T16:20:10.770213595Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:10.772755 containerd[1894]: time="2024-06-25T16:20:10.772719331Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:10.774965 containerd[1894]: time="2024-06-25T16:20:10.774927309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:10.775855 containerd[1894]: time="2024-06-25T16:20:10.775811736Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.548302257s" Jun 25 16:20:10.775948 containerd[1894]: time="2024-06-25T16:20:10.775863974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:20:14.497167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:20:14.497443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:14.504122 kernel: audit: type=1130 audit(1719332414.496:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.504281 kernel: audit: type=1131 audit(1719332414.496:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.506488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:14.871779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:14.877115 kernel: audit: type=1130 audit(1719332414.871:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:14.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.889406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:14.890756 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:20:14.891047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:14.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.894115 kernel: audit: type=1131 audit(1719332414.890:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:14.897550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:14.927371 systemd[1]: Reloading. Jun 25 16:20:15.215672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:20:15.329341 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:20:15.329457 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:20:15.329952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:15.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:20:15.337968 kernel: audit: type=1130 audit(1719332415.328:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:20:15.349654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:15.654345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:15.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:15.660087 kernel: audit: type=1130 audit(1719332415.653:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:15.759453 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:20:15.759851 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:20:15.759903 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:20:15.760030 kubelet[2721]: I0625 16:20:15.759997 2721 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:20:16.387845 kubelet[2721]: I0625 16:20:16.387806 2721 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:20:16.387845 kubelet[2721]: I0625 16:20:16.387841 2721 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:20:16.388154 kubelet[2721]: I0625 16:20:16.388137 2721 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:20:16.419371 kubelet[2721]: I0625 16:20:16.419341 2721 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:20:16.421789 kubelet[2721]: E0625 16:20:16.421765 2721 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.440662 kubelet[2721]: I0625 16:20:16.440612 2721 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:20:16.441145 kubelet[2721]: I0625 16:20:16.441124 2721 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:20:16.441351 kubelet[2721]: I0625 16:20:16.441328 2721 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:20:16.441508 kubelet[2721]: I0625 16:20:16.441359 2721 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:20:16.441508 kubelet[2721]: I0625 16:20:16.441374 2721 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:20:16.442448 kubelet[2721]: I0625 16:20:16.442422 2721 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:20:16.447076 kubelet[2721]: I0625 
16:20:16.447032 2721 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:20:16.447076 kubelet[2721]: I0625 16:20:16.447083 2721 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:20:16.447290 kubelet[2721]: I0625 16:20:16.447122 2721 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:20:16.447290 kubelet[2721]: I0625 16:20:16.447144 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:20:16.449238 kubelet[2721]: I0625 16:20:16.449215 2721 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:20:16.454413 kubelet[2721]: W0625 16:20:16.454356 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.454610 kubelet[2721]: E0625 16:20:16.454598 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.454797 kubelet[2721]: W0625 16:20:16.454761 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.454886 kubelet[2721]: E0625 16:20:16.454877 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.465795 kubelet[2721]: W0625 16:20:16.465745 2721 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
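The repeated "connection refused" reflector errors above are expected at this point in the boot: the kubelet is the component that will start the control plane, via the static pod manifests it just registered under /etc/kubernetes/manifests, so its own List/Watch calls against https://172.31.30.52:6443 cannot succeed until the static kube-apiserver pod is up. A hypothetical sketch of that wait-and-retry pattern (not client-go's actual code; the host and port are taken from the log, the delay is assumed):

```python
# Hypothetical sketch of the retry pattern behind the "connection refused" lines above;
# client-go's reflectors do the equivalent internally with their own backoff.
import socket
import time

def wait_for_apiserver(host: str = "172.31.30.52", port: int = 6443, delay: float = 1.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return                     # the static kube-apiserver pod is finally serving
        except OSError:
            time.sleep(delay)              # keep retrying, as the reflectors in the log do

# wait_for_apiserver()  # would block until 172.31.30.52:6443 accepts connections
```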
Jun 25 16:20:16.466727 kubelet[2721]: I0625 16:20:16.466703 2721 server.go:1232] "Started kubelet" Jun 25 16:20:16.471202 kubelet[2721]: I0625 16:20:16.471172 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:20:16.482000 audit[2731]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.482000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcae9bc020 a2=0 a3=7f5cc05ace90 items=0 ppid=2721 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.485947 kubelet[2721]: E0625 16:20:16.485838 2721 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-52.17dc4bb92e4f25bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-52", UID:"ip-172-31-30-52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-52"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 20, 16, 466658749, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 20, 16, 466658749, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-52"}': 'Post "https://172.31.30.52:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.52:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:20:16.498374 kubelet[2721]: E0625 16:20:16.487760 2721 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:20:16.498670 kubelet[2721]: E0625 16:20:16.498630 2721 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:20:16.500493 kubelet[2721]: I0625 16:20:16.500472 2721 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:20:16.500612 kernel: audit: type=1325 audit(1719332416.482:208): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.500688 kernel: audit: type=1300 audit(1719332416.482:208): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcae9bc020 a2=0 a3=7f5cc05ace90 items=0 ppid=2721 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:20:16.502108 kubelet[2721]: I0625 16:20:16.502091 2721 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:20:16.506247 kernel: audit: type=1327 audit(1719332416.482:208): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:20:16.509984 kubelet[2721]: I0625 16:20:16.509951 2721 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:20:16.510174 kubelet[2721]: I0625 16:20:16.510157 2721 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:20:16.510555 kubelet[2721]: I0625 16:20:16.510533 2721 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:20:16.513848 kubelet[2721]: I0625 16:20:16.513819 2721 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:20:16.514101 kubelet[2721]: I0625 16:20:16.514015 2721 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:20:16.513000 audit[2732]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.517230 kubelet[2721]: W0625 16:20:16.515989 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.517230 kubelet[2721]: E0625 16:20:16.516044 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.517230 kubelet[2721]: E0625 16:20:16.516145 2721 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": dial tcp 172.31.30.52:6443: connect: connection refused" interval="200ms" Jun 25 16:20:16.513000 audit[2732]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe769377b0 a2=0 a3=7ff6df349e90 items=0 ppid=2721 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.513000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:20:16.518110 kernel: audit: type=1325 audit(1719332416.513:209): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.521000 audit[2734]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.521000 audit[2734]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3d66cb00 a2=0 a3=7f056cd8de90 items=0 ppid=2721 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:20:16.526000 audit[2736]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.526000 audit[2736]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff957d11d0 a2=0 a3=7f5aa9097e90 items=0 ppid=2721 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.526000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:20:16.555000 audit[2742]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.555000 audit[2742]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcc3e30bb0 a2=0 a3=7fcf2bfd0e90 items=0 ppid=2721 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.555000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:20:16.557183 kubelet[2721]: I0625 16:20:16.557154 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:20:16.557000 audit[2744]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2744 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:16.557000 audit[2744]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff0c956a90 a2=0 a3=7fac4731ae90 items=0 ppid=2721 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:20:16.559230 kubelet[2721]: I0625 16:20:16.559200 2721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:20:16.559230 kubelet[2721]: I0625 16:20:16.559227 2721 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:20:16.559534 kubelet[2721]: I0625 16:20:16.559251 2721 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:20:16.559534 kubelet[2721]: E0625 16:20:16.559304 2721 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:20:16.560000 audit[2745]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.560000 audit[2745]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3ecc8350 a2=0 a3=7fdb5e9b7e90 items=0 ppid=2721 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:20:16.562000 audit[2746]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.566576 kubelet[2721]: W0625 16:20:16.566544 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.566666 kubelet[2721]: E0625 16:20:16.566589 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:16.567000 audit[2747]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:16.567000 audit[2747]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd984713a0 a2=0 a3=7f35e5a7ae90 items=0 ppid=2721 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:20:16.562000 audit[2746]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8afbcf50 a2=0 a3=7fe1e233de90 items=0 ppid=2721 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:20:16.574000 audit[2751]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:16.574000 audit[2751]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffcb89eb70 a2=0 a3=7f02ec8b5e90 items=0 ppid=2721 pid=2751 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:20:16.576000 audit[2750]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:16.576000 audit[2750]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4bf11dd0 a2=0 a3=7f4e8db5be90 items=0 ppid=2721 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:20:16.577000 audit[2752]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:16.577000 audit[2752]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffcd2a8c30 a2=0 a3=7f282e3eee90 items=0 ppid=2721 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:20:16.605904 kubelet[2721]: I0625 16:20:16.605882 2721 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:20:16.606080 kubelet[2721]: I0625 16:20:16.606052 2721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:20:16.606168 kubelet[2721]: I0625 16:20:16.606159 2721 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:20:16.609379 kubelet[2721]: I0625 16:20:16.609357 2721 policy_none.go:49] "None policy: Start" Jun 25 16:20:16.610392 kubelet[2721]: I0625 16:20:16.610372 2721 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:20:16.610392 kubelet[2721]: I0625 16:20:16.610402 2721 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:20:16.612461 kubelet[2721]: I0625 16:20:16.612435 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:16.612928 kubelet[2721]: E0625 16:20:16.612903 2721 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.52:6443/api/v1/nodes\": dial tcp 172.31.30.52:6443: connect: connection refused" node="ip-172-31-30-52" Jun 25 16:20:16.617637 kubelet[2721]: I0625 16:20:16.617606 2721 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:20:16.617913 kubelet[2721]: I0625 16:20:16.617896 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:20:16.636730 kubelet[2721]: E0625 16:20:16.636631 2721 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-52\" not found" Jun 25 16:20:16.660515 kubelet[2721]: I0625 16:20:16.660375 2721 topology_manager.go:215] "Topology Admit Handler" podUID="c5570f5d4c33a41d3e461f9594686d57" podNamespace="kube-system" 
podName="kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.666706 kubelet[2721]: I0625 16:20:16.666191 2721 topology_manager.go:215] "Topology Admit Handler" podUID="6288b396809daafdfc592881cc0f4e59" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-52" Jun 25 16:20:16.676054 kubelet[2721]: I0625 16:20:16.676020 2721 topology_manager.go:215] "Topology Admit Handler" podUID="aaf2738207f035f94164eb0c8f4ee77d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-52" Jun 25 16:20:16.716983 kubelet[2721]: E0625 16:20:16.716939 2721 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": dial tcp 172.31.30.52:6443: connect: connection refused" interval="400ms" Jun 25 16:20:16.814622 kubelet[2721]: I0625 16:20:16.814596 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:16.815084 kubelet[2721]: I0625 16:20:16.814597 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:16.815236 kubelet[2721]: E0625 16:20:16.815100 2721 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.52:6443/api/v1/nodes\": dial tcp 172.31.30.52:6443: connect: connection refused" node="ip-172-31-30-52" Jun 25 16:20:16.815398 kubelet[2721]: I0625 16:20:16.815382 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.815530 kubelet[2721]: I0625 16:20:16.815511 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.815609 kubelet[2721]: I0625 16:20:16.815570 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.815609 kubelet[2721]: I0625 16:20:16.815603 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-ca-certs\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:16.815713 kubelet[2721]: I0625 16:20:16.815651 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:16.815713 kubelet[2721]: I0625 16:20:16.815683 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.815800 kubelet[2721]: I0625 16:20:16.815733 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:16.815800 kubelet[2721]: I0625 16:20:16.815790 2721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6288b396809daafdfc592881cc0f4e59-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-52\" (UID: \"6288b396809daafdfc592881cc0f4e59\") " pod="kube-system/kube-scheduler-ip-172-31-30-52" Jun 25 16:20:16.984119 containerd[1894]: time="2024-06-25T16:20:16.984049760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-52,Uid:c5570f5d4c33a41d3e461f9594686d57,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:16.989580 containerd[1894]: time="2024-06-25T16:20:16.989391429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-52,Uid:aaf2738207f035f94164eb0c8f4ee77d,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:16.993437 containerd[1894]: time="2024-06-25T16:20:16.993392917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-52,Uid:6288b396809daafdfc592881cc0f4e59,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:17.124269 kubelet[2721]: E0625 16:20:17.124229 2721 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": dial tcp 172.31.30.52:6443: connect: connection refused" interval="800ms" Jun 25 16:20:17.220699 kubelet[2721]: I0625 16:20:17.220671 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:17.221605 kubelet[2721]: E0625 16:20:17.221582 2721 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.52:6443/api/v1/nodes\": dial tcp 172.31.30.52:6443: connect: connection refused" node="ip-172-31-30-52" Jun 25 16:20:17.370440 kubelet[2721]: W0625 16:20:17.370193 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.370440 kubelet[2721]: E0625 16:20:17.370255 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.391000 kubelet[2721]: 
W0625 16:20:17.390884 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.391000 kubelet[2721]: E0625 16:20:17.391003 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.507199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946652708.mount: Deactivated successfully. Jun 25 16:20:17.525015 containerd[1894]: time="2024-06-25T16:20:17.524965301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.527051 containerd[1894]: time="2024-06-25T16:20:17.527005926Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.529043 containerd[1894]: time="2024-06-25T16:20:17.528986442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:20:17.530566 containerd[1894]: time="2024-06-25T16:20:17.530527010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:20:17.532618 containerd[1894]: time="2024-06-25T16:20:17.532572481Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.534437 containerd[1894]: time="2024-06-25T16:20:17.534351267Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.536317 containerd[1894]: time="2024-06-25T16:20:17.536279620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.537727 containerd[1894]: time="2024-06-25T16:20:17.537675084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:20:17.540361 containerd[1894]: time="2024-06-25T16:20:17.540319154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.542093 containerd[1894]: time="2024-06-25T16:20:17.542032118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.543737 containerd[1894]: time="2024-06-25T16:20:17.543639891Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 
25 16:20:17.545544 containerd[1894]: time="2024-06-25T16:20:17.545507759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.547996 containerd[1894]: time="2024-06-25T16:20:17.547950091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.452036ms" Jun 25 16:20:17.550326 containerd[1894]: time="2024-06-25T16:20:17.550288361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.552114 containerd[1894]: time="2024-06-25T16:20:17.552074745Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.554248 containerd[1894]: time="2024-06-25T16:20:17.554204758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.019089ms" Jun 25 16:20:17.558985 containerd[1894]: time="2024-06-25T16:20:17.558936124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:20:17.559921 containerd[1894]: time="2024-06-25T16:20:17.559878131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.236636ms" Jun 25 16:20:17.735249 kubelet[2721]: W0625 16:20:17.734682 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.735249 kubelet[2721]: E0625 16:20:17.734758 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.779311 kubelet[2721]: W0625 16:20:17.767671 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.779311 
kubelet[2721]: E0625 16:20:17.767743 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:17.829085 containerd[1894]: time="2024-06-25T16:20:17.828930160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:17.829272 containerd[1894]: time="2024-06-25T16:20:17.829101446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.829272 containerd[1894]: time="2024-06-25T16:20:17.829178382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:17.829272 containerd[1894]: time="2024-06-25T16:20:17.829234298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.853053 containerd[1894]: time="2024-06-25T16:20:17.852930195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:17.853053 containerd[1894]: time="2024-06-25T16:20:17.853021563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.854665 containerd[1894]: time="2024-06-25T16:20:17.854485276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:17.855515 containerd[1894]: time="2024-06-25T16:20:17.854649169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.918979 containerd[1894]: time="2024-06-25T16:20:17.918695592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:17.919765 containerd[1894]: time="2024-06-25T16:20:17.919038878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.919765 containerd[1894]: time="2024-06-25T16:20:17.919145738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:17.919765 containerd[1894]: time="2024-06-25T16:20:17.919264149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.934497 kubelet[2721]: E0625 16:20:17.934390 2721 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": dial tcp 172.31.30.52:6443: connect: connection refused" interval="1.6s" Jun 25 16:20:18.029027 kubelet[2721]: I0625 16:20:18.027289 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:18.029027 kubelet[2721]: E0625 16:20:18.027893 2721 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.52:6443/api/v1/nodes\": dial tcp 172.31.30.52:6443: connect: connection refused" node="ip-172-31-30-52" Jun 25 16:20:18.055260 containerd[1894]: time="2024-06-25T16:20:18.055215124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-52,Uid:6288b396809daafdfc592881cc0f4e59,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d04bb501b6730b506bec3cdeb0c0329a7b0d427e011c695f9b54aee9be6e08\"" Jun 25 16:20:18.060554 containerd[1894]: time="2024-06-25T16:20:18.060511768Z" level=info msg="CreateContainer within sandbox \"96d04bb501b6730b506bec3cdeb0c0329a7b0d427e011c695f9b54aee9be6e08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:20:18.075014 containerd[1894]: time="2024-06-25T16:20:18.074967549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-52,Uid:aaf2738207f035f94164eb0c8f4ee77d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a96bd9c2659a381ab832eb1d5d53f60ce6087134e3b60b3365da8d986e659684\"" Jun 25 16:20:18.079452 containerd[1894]: time="2024-06-25T16:20:18.079407246Z" level=info msg="CreateContainer within sandbox \"a96bd9c2659a381ab832eb1d5d53f60ce6087134e3b60b3365da8d986e659684\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:20:18.094590 containerd[1894]: time="2024-06-25T16:20:18.094535438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-52,Uid:c5570f5d4c33a41d3e461f9594686d57,Namespace:kube-system,Attempt:0,} returns sandbox id \"75620bea4148b1d6e9f3dd7d0209be633feb7b422974ff0ab092739ad23952f3\"" Jun 25 16:20:18.099322 containerd[1894]: time="2024-06-25T16:20:18.099280485Z" level=info msg="CreateContainer within sandbox \"75620bea4148b1d6e9f3dd7d0209be633feb7b422974ff0ab092739ad23952f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:20:18.105132 containerd[1894]: time="2024-06-25T16:20:18.105073327Z" level=info msg="CreateContainer within sandbox \"96d04bb501b6730b506bec3cdeb0c0329a7b0d427e011c695f9b54aee9be6e08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64\"" Jun 25 16:20:18.109179 containerd[1894]: time="2024-06-25T16:20:18.109118401Z" level=info msg="StartContainer for \"a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64\"" Jun 25 16:20:18.130117 containerd[1894]: time="2024-06-25T16:20:18.130033067Z" level=info msg="CreateContainer within sandbox \"a96bd9c2659a381ab832eb1d5d53f60ce6087134e3b60b3365da8d986e659684\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de18bb11ee7884c5d2ffdc172bb665214298d52189169f7c6d90b4dce8fb253f\"" Jun 25 16:20:18.130868 containerd[1894]: time="2024-06-25T16:20:18.130831441Z" level=info 
msg="StartContainer for \"de18bb11ee7884c5d2ffdc172bb665214298d52189169f7c6d90b4dce8fb253f\"" Jun 25 16:20:18.141682 containerd[1894]: time="2024-06-25T16:20:18.141624798Z" level=info msg="CreateContainer within sandbox \"75620bea4148b1d6e9f3dd7d0209be633feb7b422974ff0ab092739ad23952f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62\"" Jun 25 16:20:18.142235 containerd[1894]: time="2024-06-25T16:20:18.142196439Z" level=info msg="StartContainer for \"e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62\"" Jun 25 16:20:18.261152 containerd[1894]: time="2024-06-25T16:20:18.260085972Z" level=info msg="StartContainer for \"a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64\" returns successfully" Jun 25 16:20:18.288608 kubelet[2721]: E0625 16:20:18.288234 2721 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-52.17dc4bb92e4f25bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-52", UID:"ip-172-31-30-52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-52"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 20, 16, 466658749, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 20, 16, 466658749, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-52"}': 'Post "https://172.31.30.52:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.52:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:20:18.356772 containerd[1894]: time="2024-06-25T16:20:18.356626092Z" level=info msg="StartContainer for \"de18bb11ee7884c5d2ffdc172bb665214298d52189169f7c6d90b4dce8fb253f\" returns successfully" Jun 25 16:20:18.408964 containerd[1894]: time="2024-06-25T16:20:18.408911052Z" level=info msg="StartContainer for \"e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62\" returns successfully" Jun 25 16:20:18.567503 kubelet[2721]: E0625 16:20:18.567383 2721 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.021697 kubelet[2721]: W0625 16:20:19.021625 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.021697 kubelet[2721]: E0625 16:20:19.021705 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed 
to list *v1.Service: Get "https://172.31.30.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.202386 kubelet[2721]: W0625 16:20:19.202315 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.202386 kubelet[2721]: E0625 16:20:19.202393 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.535881 kubelet[2721]: E0625 16:20:19.535844 2721 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": dial tcp 172.31.30.52:6443: connect: connection refused" interval="3.2s" Jun 25 16:20:19.609838 kubelet[2721]: W0625 16:20:19.609763 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.610046 kubelet[2721]: E0625 16:20:19.610033 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-52&limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.630841 kubelet[2721]: I0625 16:20:19.630818 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:19.632043 kubelet[2721]: E0625 16:20:19.632023 2721 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.52:6443/api/v1/nodes\": dial tcp 172.31.30.52:6443: connect: connection refused" node="ip-172-31-30-52" Jun 25 16:20:19.850045 kubelet[2721]: W0625 16:20:19.849916 2721 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:19.850338 kubelet[2721]: E0625 16:20:19.850320 2721 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.52:6443: connect: connection refused Jun 25 16:20:22.741307 kubelet[2721]: E0625 16:20:22.741273 2721 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-52\" not found" node="ip-172-31-30-52" Jun 25 16:20:22.834426 kubelet[2721]: I0625 16:20:22.834397 2721 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:22.867106 kubelet[2721]: I0625 16:20:22.867079 2721 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-52" Jun 25 16:20:22.889318 kubelet[2721]: E0625 16:20:22.889286 2721 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:22.992462 kubelet[2721]: E0625 16:20:22.991750 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.093082 kubelet[2721]: E0625 16:20:23.092415 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.193072 kubelet[2721]: E0625 16:20:23.193009 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.294199 kubelet[2721]: E0625 16:20:23.294027 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.394703 kubelet[2721]: E0625 16:20:23.394658 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.494895 kubelet[2721]: E0625 16:20:23.494855 2721 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-52\" not found" Jun 25 16:20:23.874883 update_engine[1886]: I0625 16:20:23.874104 1886 update_attempter.cc:509] Updating boot flags... Jun 25 16:20:23.977099 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3015) Jun 25 16:20:24.256110 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3015) Jun 25 16:20:24.456865 kubelet[2721]: I0625 16:20:24.456033 2721 apiserver.go:52] "Watching apiserver" Jun 25 16:20:24.457486 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3015) Jun 25 16:20:24.517270 kubelet[2721]: I0625 16:20:24.514632 2721 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:20:25.377093 systemd[1]: Reloading. Jun 25 16:20:25.706964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:20:25.854740 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:25.856420 kubelet[2721]: I0625 16:20:25.856281 2721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:20:25.891129 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:20:25.891360 kernel: audit: type=1131 audit(1719332425.875:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:25.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:25.876148 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:20:25.876814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:20:25.898790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:26.212508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:20:26.219494 kernel: audit: type=1130 audit(1719332426.211:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:26.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:26.414474 kubelet[3353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:20:26.415097 kubelet[3353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:20:26.415184 kubelet[3353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:20:26.415376 kubelet[3353]: I0625 16:20:26.415346 3353 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:20:26.424438 kubelet[3353]: I0625 16:20:26.424410 3353 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:20:26.424762 kubelet[3353]: I0625 16:20:26.424748 3353 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:20:26.425168 kubelet[3353]: I0625 16:20:26.425155 3353 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:20:26.428788 kubelet[3353]: I0625 16:20:26.428770 3353 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:20:26.433756 kubelet[3353]: I0625 16:20:26.433736 3353 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:20:26.465586 kubelet[3353]: I0625 16:20:26.464110 3353 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:20:26.466824 kubelet[3353]: I0625 16:20:26.466799 3353 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:20:26.467556 kubelet[3353]: I0625 16:20:26.467534 3353 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:20:26.467811 kubelet[3353]: I0625 16:20:26.467798 3353 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:20:26.468243 kubelet[3353]: I0625 16:20:26.468145 3353 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:20:26.468566 kubelet[3353]: I0625 16:20:26.468552 3353 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:20:26.472751 kubelet[3353]: I0625 16:20:26.472728 3353 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:20:26.472967 kubelet[3353]: I0625 16:20:26.472954 3353 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:20:26.473260 kubelet[3353]: I0625 16:20:26.473230 3353 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:20:26.473421 kubelet[3353]: I0625 16:20:26.473393 3353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:20:26.477281 kubelet[3353]: I0625 16:20:26.477256 3353 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:20:26.478284 kubelet[3353]: I0625 16:20:26.478266 3353 server.go:1232] "Started kubelet" Jun 25 16:20:26.497236 kubelet[3353]: I0625 16:20:26.494451 3353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:20:26.497236 kubelet[3353]: I0625 16:20:26.496230 3353 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:20:26.502456 kubelet[3353]: I0625 16:20:26.502405 3353 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:20:26.508199 kubelet[3353]: I0625 16:20:26.508166 3353 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:20:26.508725 kubelet[3353]: I0625 16:20:26.508704 3353 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:20:26.532927 kubelet[3353]: I0625 16:20:26.532882 3353 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:20:26.550842 kubelet[3353]: I0625 16:20:26.533028 3353 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:20:26.551213 kubelet[3353]: I0625 16:20:26.551197 3353 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:20:26.594900 kubelet[3353]: E0625 16:20:26.593564 3353 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:20:26.594900 kubelet[3353]: E0625 16:20:26.593607 3353 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:20:26.651028 kubelet[3353]: I0625 16:20:26.649425 3353 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-52" Jun 25 16:20:26.701410 kubelet[3353]: I0625 16:20:26.700233 3353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:20:26.711330 kubelet[3353]: I0625 16:20:26.711306 3353 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-52" Jun 25 16:20:26.711564 kubelet[3353]: I0625 16:20:26.711542 3353 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-52" Jun 25 16:20:26.713116 kubelet[3353]: I0625 16:20:26.713086 3353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:20:26.713315 kubelet[3353]: I0625 16:20:26.713296 3353 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:20:26.713450 kubelet[3353]: I0625 16:20:26.713439 3353 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:20:26.721142 kubelet[3353]: E0625 16:20:26.719148 3353 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:20:26.822029 kubelet[3353]: E0625 16:20:26.821702 3353 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.910095 3353 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.910857 3353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.910889 3353 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.911108 3353 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.911137 3353 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:20:26.912147 kubelet[3353]: I0625 16:20:26.911147 3353 policy_none.go:49] "None policy: Start" Jun 25 16:20:26.915918 kubelet[3353]: I0625 16:20:26.915885 3353 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:20:26.915918 kubelet[3353]: I0625 16:20:26.915924 3353 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:20:26.916213 kubelet[3353]: I0625 16:20:26.916190 3353 state_mem.go:75] "Updated machine memory state" Jun 25 16:20:26.919959 kubelet[3353]: I0625 16:20:26.919884 3353 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Jun 25 16:20:26.936587 kubelet[3353]: I0625 16:20:26.935458 3353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:20:27.023715 kubelet[3353]: I0625 16:20:27.023368 3353 topology_manager.go:215] "Topology Admit Handler" podUID="aaf2738207f035f94164eb0c8f4ee77d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-52" Jun 25 16:20:27.023950 kubelet[3353]: I0625 16:20:27.023732 3353 topology_manager.go:215] "Topology Admit Handler" podUID="c5570f5d4c33a41d3e461f9594686d57" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.023950 kubelet[3353]: I0625 16:20:27.023796 3353 topology_manager.go:215] "Topology Admit Handler" podUID="6288b396809daafdfc592881cc0f4e59" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-52" Jun 25 16:20:27.035897 kubelet[3353]: E0625 16:20:27.035867 3353 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-52\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.065988 kubelet[3353]: I0625 16:20:27.064589 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:27.065988 kubelet[3353]: I0625 16:20:27.064705 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:27.065988 kubelet[3353]: I0625 16:20:27.064795 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.065988 kubelet[3353]: I0625 16:20:27.064853 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaf2738207f035f94164eb0c8f4ee77d-ca-certs\") pod \"kube-apiserver-ip-172-31-30-52\" (UID: \"aaf2738207f035f94164eb0c8f4ee77d\") " pod="kube-system/kube-apiserver-ip-172-31-30-52" Jun 25 16:20:27.065988 kubelet[3353]: I0625 16:20:27.064946 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.070650 kubelet[3353]: I0625 16:20:27.064997 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.070650 kubelet[3353]: I0625 16:20:27.065037 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.070650 kubelet[3353]: I0625 16:20:27.065102 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6288b396809daafdfc592881cc0f4e59-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-52\" (UID: \"6288b396809daafdfc592881cc0f4e59\") " pod="kube-system/kube-scheduler-ip-172-31-30-52" Jun 25 16:20:27.070650 kubelet[3353]: I0625 16:20:27.065222 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5570f5d4c33a41d3e461f9594686d57-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-52\" (UID: \"c5570f5d4c33a41d3e461f9594686d57\") " pod="kube-system/kube-controller-manager-ip-172-31-30-52" Jun 25 16:20:27.475867 kubelet[3353]: I0625 16:20:27.475817 3353 apiserver.go:52] "Watching apiserver" Jun 25 16:20:27.551540 kubelet[3353]: I0625 16:20:27.551486 3353 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:20:27.848178 kubelet[3353]: I0625 16:20:27.848075 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-52" podStartSLOduration=0.847988961 podCreationTimestamp="2024-06-25 16:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:27.832852721 +0000 UTC m=+1.594056492" watchObservedRunningTime="2024-06-25 16:20:27.847988961 +0000 UTC m=+1.609192725" Jun 25 16:20:27.848573 kubelet[3353]: I0625 16:20:27.848556 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-52" podStartSLOduration=0.848519822 podCreationTimestamp="2024-06-25 16:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:27.847255938 +0000 UTC m=+1.608459708" watchObservedRunningTime="2024-06-25 16:20:27.848519822 +0000 UTC m=+1.609723590" Jun 25 16:20:27.866656 kubelet[3353]: I0625 16:20:27.866625 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-52" podStartSLOduration=3.866578739 podCreationTimestamp="2024-06-25 16:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:27.864588354 +0000 UTC m=+1.625792126" watchObservedRunningTime="2024-06-25 16:20:27.866578739 +0000 UTC m=+1.627782511" Jun 25 16:20:33.050346 sudo[2227]: pam_unix(sudo:session): session closed for user root Jun 25 16:20:33.049000 audit[2227]: USER_END pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:33.049000 audit[2227]: CRED_DISP pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.058252 kernel: audit: type=1106 audit(1719332433.049:222): pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.058355 kernel: audit: type=1104 audit(1719332433.049:223): pid=2227 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.080429 sshd[2223]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:33.081000 audit[2223]: USER_END pid=2223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:20:33.089190 kernel: audit: type=1106 audit(1719332433.081:224): pid=2223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:20:33.089210 systemd-logind[1885]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:20:33.082000 audit[2223]: CRED_DISP pid=2223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:20:33.093494 kernel: audit: type=1104 audit(1719332433.082:225): pid=2223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:20:33.094825 systemd[1]: sshd@6-172.31.30.52:22-139.178.89.65:36782.service: Deactivated successfully. Jun 25 16:20:33.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.52:22-139.178.89.65:36782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.096377 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:20:33.098406 systemd-logind[1885]: Removed session 7. Jun 25 16:20:33.099107 kernel: audit: type=1131 audit(1719332433.094:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.52:22-139.178.89.65:36782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:37.494358 kubelet[3353]: I0625 16:20:37.494331 3353 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:20:37.495966 containerd[1894]: time="2024-06-25T16:20:37.495901787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 16:20:37.496389 kubelet[3353]: I0625 16:20:37.496295 3353 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:20:37.848785 kubelet[3353]: I0625 16:20:37.848674 3353 topology_manager.go:215] "Topology Admit Handler" podUID="a807b7a7-4766-4965-9d0b-c7951f041402" podNamespace="kube-system" podName="kube-proxy-775gx" Jun 25 16:20:37.869088 kubelet[3353]: I0625 16:20:37.868479 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gfd\" (UniqueName: \"kubernetes.io/projected/a807b7a7-4766-4965-9d0b-c7951f041402-kube-api-access-j2gfd\") pod \"kube-proxy-775gx\" (UID: \"a807b7a7-4766-4965-9d0b-c7951f041402\") " pod="kube-system/kube-proxy-775gx" Jun 25 16:20:37.869088 kubelet[3353]: I0625 16:20:37.868541 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a807b7a7-4766-4965-9d0b-c7951f041402-xtables-lock\") pod \"kube-proxy-775gx\" (UID: \"a807b7a7-4766-4965-9d0b-c7951f041402\") " pod="kube-system/kube-proxy-775gx" Jun 25 16:20:37.869088 kubelet[3353]: I0625 16:20:37.868699 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a807b7a7-4766-4965-9d0b-c7951f041402-kube-proxy\") pod \"kube-proxy-775gx\" (UID: \"a807b7a7-4766-4965-9d0b-c7951f041402\") " pod="kube-system/kube-proxy-775gx" Jun 25 16:20:37.869088 kubelet[3353]: I0625 16:20:37.868738 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a807b7a7-4766-4965-9d0b-c7951f041402-lib-modules\") pod \"kube-proxy-775gx\" (UID: \"a807b7a7-4766-4965-9d0b-c7951f041402\") " pod="kube-system/kube-proxy-775gx" Jun 25 16:20:38.160707 containerd[1894]: time="2024-06-25T16:20:38.160329398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-775gx,Uid:a807b7a7-4766-4965-9d0b-c7951f041402,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:38.245136 containerd[1894]: time="2024-06-25T16:20:38.244945033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:38.245136 containerd[1894]: time="2024-06-25T16:20:38.245046974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:38.245555 containerd[1894]: time="2024-06-25T16:20:38.245121682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:38.245555 containerd[1894]: time="2024-06-25T16:20:38.245521134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:38.351745 containerd[1894]: time="2024-06-25T16:20:38.351687154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-775gx,Uid:a807b7a7-4766-4965-9d0b-c7951f041402,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7102075af2d0c6fac6f4be9ddd0873bdb15b8c399a4844525cdea215430bbb1\"" Jun 25 16:20:38.367992 containerd[1894]: time="2024-06-25T16:20:38.367941443Z" level=info msg="CreateContainer within sandbox \"e7102075af2d0c6fac6f4be9ddd0873bdb15b8c399a4844525cdea215430bbb1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:20:38.402134 containerd[1894]: time="2024-06-25T16:20:38.402076337Z" level=info msg="CreateContainer within sandbox \"e7102075af2d0c6fac6f4be9ddd0873bdb15b8c399a4844525cdea215430bbb1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0affd5471eb4205e12fd180b57bbeca9c031e9fca9aef9e9e36c7d677f0b233f\"" Jun 25 16:20:38.403604 containerd[1894]: time="2024-06-25T16:20:38.403563084Z" level=info msg="StartContainer for \"0affd5471eb4205e12fd180b57bbeca9c031e9fca9aef9e9e36c7d677f0b233f\"" Jun 25 16:20:38.491085 kubelet[3353]: I0625 16:20:38.489942 3353 topology_manager.go:215] "Topology Admit Handler" podUID="d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-cvw9j" Jun 25 16:20:38.580926 kubelet[3353]: I0625 16:20:38.580884 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmhk\" (UniqueName: \"kubernetes.io/projected/d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4-kube-api-access-wjmhk\") pod \"tigera-operator-76c4974c85-cvw9j\" (UID: \"d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4\") " pod="tigera-operator/tigera-operator-76c4974c85-cvw9j" Jun 25 16:20:38.581541 kubelet[3353]: I0625 16:20:38.581521 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4-var-lib-calico\") pod \"tigera-operator-76c4974c85-cvw9j\" (UID: \"d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4\") " pod="tigera-operator/tigera-operator-76c4974c85-cvw9j" Jun 25 16:20:38.680110 containerd[1894]: time="2024-06-25T16:20:38.680023000Z" level=info msg="StartContainer for \"0affd5471eb4205e12fd180b57bbeca9c031e9fca9aef9e9e36c7d677f0b233f\" returns successfully" Jun 25 16:20:38.795329 containerd[1894]: time="2024-06-25T16:20:38.794785847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-cvw9j,Uid:d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:20:38.833940 containerd[1894]: time="2024-06-25T16:20:38.833816210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:38.833940 containerd[1894]: time="2024-06-25T16:20:38.833905988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:38.834987 containerd[1894]: time="2024-06-25T16:20:38.834209684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:38.834987 containerd[1894]: time="2024-06-25T16:20:38.834798268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:38.879739 kubelet[3353]: I0625 16:20:38.879507 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-775gx" podStartSLOduration=1.87945656 podCreationTimestamp="2024-06-25 16:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:38.877854941 +0000 UTC m=+12.639058713" watchObservedRunningTime="2024-06-25 16:20:38.87945656 +0000 UTC m=+12.640660329" Jun 25 16:20:38.965372 containerd[1894]: time="2024-06-25T16:20:38.965209572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-cvw9j,Uid:d4a0ac46-ae4b-447c-bbe2-805a2f39fcb4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64627de833179c540adf0bc9b77acd715669f3d8b352ee7dd6a2d38d3221d8f0\"" Jun 25 16:20:38.968215 containerd[1894]: time="2024-06-25T16:20:38.967759093Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:20:39.031432 systemd[1]: run-containerd-runc-k8s.io-e7102075af2d0c6fac6f4be9ddd0873bdb15b8c399a4844525cdea215430bbb1-runc.anh1ks.mount: Deactivated successfully. Jun 25 16:20:39.336000 audit[3576]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.336000 audit[3576]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef9253ef0 a2=0 a3=7ffef9253edc items=0 ppid=3494 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.342138 kernel: audit: type=1325 audit(1719332439.336:227): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.342266 kernel: audit: type=1300 audit(1719332439.336:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef9253ef0 a2=0 a3=7ffef9253edc items=0 ppid=3494 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.344477 kernel: audit: type=1327 audit(1719332439.336:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:20:39.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:20:39.341000 audit[3577]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=3577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.350638 kernel: audit: type=1325 audit(1719332439.341:228): table=nat:39 family=2 entries=1 op=nft_register_chain pid=3577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.354716 kernel: audit: type=1300 audit(1719332439.341:228): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0c8f29f0 a2=0 a3=7ffd0c8f29dc items=0 ppid=3494 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.341000 audit[3577]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0c8f29f0 a2=0 a3=7ffd0c8f29dc items=0 
ppid=3494 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:20:39.361532 kernel: audit: type=1327 audit(1719332439.341:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:20:39.365000 audit[3578]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.371106 kernel: audit: type=1325 audit(1719332439.365:229): table=filter:40 family=2 entries=1 op=nft_register_chain pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.365000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4b5d8e40 a2=0 a3=7ffe4b5d8e2c items=0 ppid=3494 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.380177 kernel: audit: type=1300 audit(1719332439.365:229): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4b5d8e40 a2=0 a3=7ffe4b5d8e2c items=0 ppid=3494 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.380310 kernel: audit: type=1327 audit(1719332439.365:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:20:39.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:20:39.388608 kernel: audit: type=1325 audit(1719332439.365:230): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.365000 audit[3579]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.365000 audit[3579]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef9695f50 a2=0 a3=7ffef9695f3c items=0 ppid=3494 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:20:39.373000 audit[3581]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=3581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.373000 audit[3581]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc12696380 a2=0 a3=7ffc1269636c items=0 ppid=3494 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.373000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:20:39.387000 audit[3582]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.387000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6b47e510 a2=0 a3=7ffe6b47e4fc items=0 ppid=3494 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:20:39.482000 audit[3583]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.482000 audit[3583]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe7c6091d0 a2=0 a3=7ffe7c6091bc items=0 ppid=3494 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:20:39.491000 audit[3585]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.491000 audit[3585]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe06fc1790 a2=0 a3=7ffe06fc177c items=0 ppid=3494 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:20:39.501000 audit[3588]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.501000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd9b4e60a0 a2=0 a3=7ffd9b4e608c items=0 ppid=3494 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:20:39.503000 audit[3589]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.503000 audit[3589]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc6936100 a2=0 a3=7ffdc69360ec items=0 ppid=3494 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:20:39.507000 audit[3591]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.507000 audit[3591]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff155cd1b0 a2=0 a3=7fff155cd19c items=0 ppid=3494 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:20:39.513000 audit[3592]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.513000 audit[3592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c30baf0 a2=0 a3=7ffc0c30badc items=0 ppid=3494 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:20:39.521000 audit[3594]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.521000 audit[3594]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc426f66f0 a2=0 a3=7ffc426f66dc items=0 ppid=3494 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:20:39.528000 audit[3597]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.528000 audit[3597]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd60e37350 a2=0 a3=7ffd60e3733c items=0 ppid=3494 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:20:39.531000 audit[3598]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3598 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:20:39.531000 audit[3598]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefeb48520 a2=0 a3=7ffefeb4850c items=0 ppid=3494 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:20:39.536000 audit[3600]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.536000 audit[3600]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca178eed0 a2=0 a3=7ffca178eebc items=0 ppid=3494 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.536000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:20:39.537000 audit[3601]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.537000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff659690d0 a2=0 a3=7fff659690bc items=0 ppid=3494 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:20:39.542000 audit[3603]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.542000 audit[3603]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec8b58a20 a2=0 a3=7ffec8b58a0c items=0 ppid=3494 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:20:39.549000 audit[3606]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.549000 audit[3606]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0506df20 a2=0 a3=7ffe0506df0c items=0 ppid=3494 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.549000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:20:39.563000 audit[3609]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.563000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1831c7c0 a2=0 a3=7ffe1831c7ac items=0 ppid=3494 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:20:39.581000 audit[3610]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.581000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc9968d30 a2=0 a3=7ffcc9968d1c items=0 ppid=3494 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:20:39.596000 audit[3612]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.596000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc5430e1c0 a2=0 a3=7ffc5430e1ac items=0 ppid=3494 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:20:39.607000 audit[3615]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.607000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc1711ee0 a2=0 a3=7ffcc1711ecc items=0 ppid=3494 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:20:39.610000 audit[3616]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.610000 audit[3616]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7fff6e63d7f0 a2=0 a3=7fff6e63d7dc items=0 ppid=3494 pid=3616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:20:39.620000 audit[3618]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:20:39.620000 audit[3618]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe74defc20 a2=0 a3=7ffe74defc0c items=0 ppid=3494 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:20:39.672000 audit[3624]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:39.672000 audit[3624]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd10871c90 a2=0 a3=7ffd10871c7c items=0 ppid=3494 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:39.680000 audit[3624]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:39.680000 audit[3624]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd10871c90 a2=0 a3=7ffd10871c7c items=0 ppid=3494 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:39.683000 audit[3629]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.683000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcd19a8810 a2=0 a3=7ffcd19a87fc items=0 ppid=3494 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:20:39.689000 audit[3631]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.689000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffd1f5b0950 a2=0 a3=7ffd1f5b093c items=0 ppid=3494 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:20:39.698000 audit[3634]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.698000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffff5dba280 a2=0 a3=7ffff5dba26c items=0 ppid=3494 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:20:39.706000 audit[3635]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.706000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeae4bdd10 a2=0 a3=7ffeae4bdcfc items=0 ppid=3494 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:20:39.712000 audit[3637]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.712000 audit[3637]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffbb5d1d0 a2=0 a3=7ffffbb5d1bc items=0 ppid=3494 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:20:39.715000 audit[3638]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.715000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca4337390 a2=0 a3=7ffca433737c items=0 ppid=3494 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.715000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:20:39.720000 audit[3640]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.720000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5c0abe80 a2=0 a3=7ffc5c0abe6c items=0 ppid=3494 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:20:39.736000 audit[3643]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.736000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff11a11930 a2=0 a3=7fff11a1191c items=0 ppid=3494 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:20:39.738000 audit[3644]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.738000 audit[3644]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2cf608b0 a2=0 a3=7ffd2cf6089c items=0 ppid=3494 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:20:39.742000 audit[3646]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.742000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe687a9ea0 a2=0 a3=7ffe687a9e8c items=0 ppid=3494 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:20:39.744000 audit[3647]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.744000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc90bb1400 a2=0 a3=7ffc90bb13ec items=0 
ppid=3494 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:20:39.751000 audit[3649]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.751000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc901a6f40 a2=0 a3=7ffc901a6f2c items=0 ppid=3494 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:20:39.761000 audit[3652]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.761000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe58d97790 a2=0 a3=7ffe58d9777c items=0 ppid=3494 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:20:39.770000 audit[3655]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.770000 audit[3655]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec863ad20 a2=0 a3=7ffec863ad0c items=0 ppid=3494 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:20:39.772000 audit[3656]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.772000 audit[3656]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbebfa870 a2=0 a3=7ffdbebfa85c items=0 ppid=3494 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:20:39.779000 
audit[3658]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.779000 audit[3658]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffce7a04190 a2=0 a3=7ffce7a0417c items=0 ppid=3494 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:20:39.785000 audit[3661]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.785000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc0f2eac20 a2=0 a3=7ffc0f2eac0c items=0 ppid=3494 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:20:39.787000 audit[3662]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.787000 audit[3662]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3c8a1e70 a2=0 a3=7ffd3c8a1e5c items=0 ppid=3494 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:20:39.790000 audit[3664]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.790000 audit[3664]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd2040fc70 a2=0 a3=7ffd2040fc5c items=0 ppid=3494 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:20:39.793000 audit[3665]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.793000 audit[3665]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3db75900 a2=0 a3=7ffe3db758ec items=0 ppid=3494 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 
25 16:20:39.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:20:39.799000 audit[3667]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.799000 audit[3667]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5ac5e400 a2=0 a3=7ffc5ac5e3ec items=0 ppid=3494 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:20:39.804000 audit[3670]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:20:39.804000 audit[3670]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffa0dddc90 a2=0 a3=7fffa0dddc7c items=0 ppid=3494 pid=3670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:20:39.809000 audit[3672]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:20:39.809000 audit[3672]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdced73d40 a2=0 a3=7ffdced73d2c items=0 ppid=3494 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.809000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:39.810000 audit[3672]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:20:39.810000 audit[3672]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdced73d40 a2=0 a3=7ffdced73d2c items=0 ppid=3494 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.810000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:40.417714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732464714.mount: Deactivated successfully. 
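[Editor's note: the audit PROCTITLE fields in the records above are the hex-encoded command lines (NUL-separated argv) of the iptables/ip6tables invocations, run through /usr/sbin/xtables-nft-multi, that set up the KUBE-* chains (apparently kube-proxy programming its service/postrouting rules). A minimal Go sketch for decoding one of these values; the proctitle string is copied verbatim from the 16:20:39.610 record above, and the NUL-to-space translation is only for readability:]

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from the audit record at 16:20:39.610 above
	// (hex-encoded argv, with NUL bytes separating the arguments).
	const proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Replace the NUL separators with spaces to recover the command line.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Output: iptables -w 5 -W 100000 -N KUBE-POSTROUTING -t nat
}
```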
Jun 25 16:20:41.326784 containerd[1894]: time="2024-06-25T16:20:41.326713041Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:41.328676 containerd[1894]: time="2024-06-25T16:20:41.328618337Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076048" Jun 25 16:20:41.337868 containerd[1894]: time="2024-06-25T16:20:41.337817813Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:41.341991 containerd[1894]: time="2024-06-25T16:20:41.341948377Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:41.344391 containerd[1894]: time="2024-06-25T16:20:41.344353774Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:41.345144 containerd[1894]: time="2024-06-25T16:20:41.345103244Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.377275654s" Jun 25 16:20:41.345242 containerd[1894]: time="2024-06-25T16:20:41.345151486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:20:41.351135 containerd[1894]: time="2024-06-25T16:20:41.351070960Z" level=info msg="CreateContainer within sandbox \"64627de833179c540adf0bc9b77acd715669f3d8b352ee7dd6a2d38d3221d8f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:20:41.377842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337713446.mount: Deactivated successfully. Jun 25 16:20:41.390681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787991664.mount: Deactivated successfully. 
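[Editor's note: the containerd entries above report the quay.io/tigera/operator:v1.34.0 pull completing with 22,070,263 bytes in roughly 2.38 s. A small Go sketch, using only the two figures from the "Pulled image" message, that works out the effective pull throughput; the MiB/s conversion is illustrative, not something the log itself states:]

```go
package main

import "fmt"

func main() {
	// Figures reported by containerd for quay.io/tigera/operator:v1.34.0 above.
	const (
		imageBytes  = 22070263    // repo digest size "22070263"
		pullSeconds = 2.377275654 // "in 2.377275654s"
	)

	bytesPerSec := imageBytes / pullSeconds
	fmt.Printf("effective pull throughput: %.1f MiB/s\n", bytesPerSec/(1024*1024))
	// Prints roughly 8.9 MiB/s for this pull.
}
```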
Jun 25 16:20:41.392937 containerd[1894]: time="2024-06-25T16:20:41.392893402Z" level=info msg="CreateContainer within sandbox \"64627de833179c540adf0bc9b77acd715669f3d8b352ee7dd6a2d38d3221d8f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d\"" Jun 25 16:20:41.394741 containerd[1894]: time="2024-06-25T16:20:41.394701202Z" level=info msg="StartContainer for \"7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d\"" Jun 25 16:20:41.482357 containerd[1894]: time="2024-06-25T16:20:41.482300137Z" level=info msg="StartContainer for \"7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d\" returns successfully" Jun 25 16:20:44.540000 audit[3721]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.543103 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 16:20:44.543224 kernel: audit: type=1325 audit(1719332444.540:278): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.540000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc8adeb500 a2=0 a3=7ffc8adeb4ec items=0 ppid=3494 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.547136 kernel: audit: type=1300 audit(1719332444.540:278): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc8adeb500 a2=0 a3=7ffc8adeb4ec items=0 ppid=3494 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.547275 kernel: audit: type=1327 audit(1719332444.540:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.541000 audit[3721]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.553080 kernel: audit: type=1325 audit(1719332444.541:279): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.541000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8adeb500 a2=0 a3=0 items=0 ppid=3494 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.559206 kernel: audit: type=1300 audit(1719332444.541:279): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8adeb500 a2=0 a3=0 items=0 ppid=3494 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.559374 
kernel: audit: type=1327 audit(1719332444.541:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.562000 audit[3723]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.562000 audit[3723]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd4a099cc0 a2=0 a3=7ffd4a099cac items=0 ppid=3494 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.568365 kernel: audit: type=1325 audit(1719332444.562:280): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.568479 kernel: audit: type=1300 audit(1719332444.562:280): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd4a099cc0 a2=0 a3=7ffd4a099cac items=0 ppid=3494 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.568516 kernel: audit: type=1327 audit(1719332444.562:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.563000 audit[3723]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.563000 audit[3723]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4a099cc0 a2=0 a3=0 items=0 ppid=3494 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:44.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:44.574176 kernel: audit: type=1325 audit(1719332444.563:281): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:44.693887 kubelet[3353]: I0625 16:20:44.693842 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-cvw9j" podStartSLOduration=4.315020693 podCreationTimestamp="2024-06-25 16:20:38 +0000 UTC" firstStartedPulling="2024-06-25 16:20:38.966788059 +0000 UTC m=+12.727991809" lastFinishedPulling="2024-06-25 16:20:41.345549412 +0000 UTC m=+15.106753178" observedRunningTime="2024-06-25 16:20:41.902264265 +0000 UTC m=+15.663468035" watchObservedRunningTime="2024-06-25 16:20:44.693782062 +0000 UTC m=+18.454985874" Jun 25 16:20:44.694778 kubelet[3353]: I0625 16:20:44.694093 3353 topology_manager.go:215] "Topology Admit Handler" podUID="cc6f5539-70a6-4fce-a5c3-f9922c339c2e" podNamespace="calico-system" podName="calico-typha-6df6fb56db-fmmr7" Jun 25 16:20:44.711450 kubelet[3353]: W0625 16:20:44.711415 3353 reflector.go:535] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-30-52" 
cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.711719 kubelet[3353]: E0625 16:20:44.711695 3353 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-30-52" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.711719 kubelet[3353]: W0625 16:20:44.711624 3353 reflector.go:535] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-52" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.711883 kubelet[3353]: E0625 16:20:44.711733 3353 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-52" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.711883 kubelet[3353]: W0625 16:20:44.711671 3353 reflector.go:535] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-30-52" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.711883 kubelet[3353]: E0625 16:20:44.711754 3353 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-30-52" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-52' and this object Jun 25 16:20:44.747272 kubelet[3353]: I0625 16:20:44.747032 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-tigera-ca-bundle\") pod \"calico-typha-6df6fb56db-fmmr7\" (UID: \"cc6f5539-70a6-4fce-a5c3-f9922c339c2e\") " pod="calico-system/calico-typha-6df6fb56db-fmmr7" Jun 25 16:20:44.747764 kubelet[3353]: I0625 16:20:44.747720 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-typha-certs\") pod \"calico-typha-6df6fb56db-fmmr7\" (UID: \"cc6f5539-70a6-4fce-a5c3-f9922c339c2e\") " pod="calico-system/calico-typha-6df6fb56db-fmmr7" Jun 25 16:20:44.747993 kubelet[3353]: I0625 16:20:44.747979 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttltg\" (UniqueName: \"kubernetes.io/projected/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-kube-api-access-ttltg\") pod \"calico-typha-6df6fb56db-fmmr7\" (UID: \"cc6f5539-70a6-4fce-a5c3-f9922c339c2e\") " pod="calico-system/calico-typha-6df6fb56db-fmmr7" Jun 25 16:20:44.794013 kubelet[3353]: I0625 16:20:44.793844 3353 topology_manager.go:215] "Topology Admit Handler" 
podUID="64b645d3-ddef-4ef3-b035-161e91c52d46" podNamespace="calico-system" podName="calico-node-t8jhg" Jun 25 16:20:44.849046 kubelet[3353]: I0625 16:20:44.849005 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-var-lib-calico\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849238 kubelet[3353]: I0625 16:20:44.849090 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-cni-net-dir\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849238 kubelet[3353]: I0625 16:20:44.849144 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-xtables-lock\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849238 kubelet[3353]: I0625 16:20:44.849171 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-policysync\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849238 kubelet[3353]: I0625 16:20:44.849200 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b645d3-ddef-4ef3-b035-161e91c52d46-tigera-ca-bundle\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849238 kubelet[3353]: I0625 16:20:44.849230 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64b645d3-ddef-4ef3-b035-161e91c52d46-node-certs\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849476 kubelet[3353]: I0625 16:20:44.849265 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcpb\" (UniqueName: \"kubernetes.io/projected/64b645d3-ddef-4ef3-b035-161e91c52d46-kube-api-access-zbcpb\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849476 kubelet[3353]: I0625 16:20:44.849318 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-cni-log-dir\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849476 kubelet[3353]: I0625 16:20:44.849365 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-flexvol-driver-host\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " 
pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849476 kubelet[3353]: I0625 16:20:44.849400 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-lib-modules\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849476 kubelet[3353]: I0625 16:20:44.849433 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-var-run-calico\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.849775 kubelet[3353]: I0625 16:20:44.849462 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64b645d3-ddef-4ef3-b035-161e91c52d46-cni-bin-dir\") pod \"calico-node-t8jhg\" (UID: \"64b645d3-ddef-4ef3-b035-161e91c52d46\") " pod="calico-system/calico-node-t8jhg" Jun 25 16:20:44.928525 kubelet[3353]: I0625 16:20:44.928487 3353 topology_manager.go:215] "Topology Admit Handler" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" podNamespace="calico-system" podName="csi-node-driver-l82ff" Jun 25 16:20:44.929330 kubelet[3353]: E0625 16:20:44.929308 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:44.970106 kubelet[3353]: E0625 16:20:44.970049 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.970274 kubelet[3353]: W0625 16:20:44.970109 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.973503 kubelet[3353]: E0625 16:20:44.973285 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.975945 kubelet[3353]: E0625 16:20:44.975902 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.975945 kubelet[3353]: W0625 16:20:44.975941 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.976156 kubelet[3353]: E0625 16:20:44.975978 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:44.976437 kubelet[3353]: E0625 16:20:44.976421 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.976516 kubelet[3353]: W0625 16:20:44.976439 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.976516 kubelet[3353]: E0625 16:20:44.976479 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.977044 kubelet[3353]: E0625 16:20:44.977015 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.977044 kubelet[3353]: W0625 16:20:44.977032 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.977196 kubelet[3353]: E0625 16:20:44.977078 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.977348 kubelet[3353]: E0625 16:20:44.977329 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.977348 kubelet[3353]: W0625 16:20:44.977344 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.977466 kubelet[3353]: E0625 16:20:44.977449 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.977806 kubelet[3353]: E0625 16:20:44.977789 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.977806 kubelet[3353]: W0625 16:20:44.977806 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.977942 kubelet[3353]: E0625 16:20:44.977915 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.978174 kubelet[3353]: E0625 16:20:44.978160 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.978174 kubelet[3353]: W0625 16:20:44.978173 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.978350 kubelet[3353]: E0625 16:20:44.978273 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:44.979553 kubelet[3353]: E0625 16:20:44.978644 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.979553 kubelet[3353]: W0625 16:20:44.978859 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.979553 kubelet[3353]: E0625 16:20:44.978886 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.979553 kubelet[3353]: E0625 16:20:44.979274 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.979553 kubelet[3353]: W0625 16:20:44.979285 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.979553 kubelet[3353]: E0625 16:20:44.979339 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.979918 kubelet[3353]: E0625 16:20:44.979616 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.979918 kubelet[3353]: W0625 16:20:44.979626 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.979918 kubelet[3353]: E0625 16:20:44.979743 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.982424 kubelet[3353]: E0625 16:20:44.982402 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.982424 kubelet[3353]: W0625 16:20:44.982424 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.982580 kubelet[3353]: E0625 16:20:44.982449 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:44.984652 kubelet[3353]: E0625 16:20:44.984618 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.984652 kubelet[3353]: W0625 16:20:44.984639 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.984789 kubelet[3353]: E0625 16:20:44.984779 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:44.984958 kubelet[3353]: E0625 16:20:44.984941 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:44.984958 kubelet[3353]: W0625 16:20:44.984958 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:44.985073 kubelet[3353]: E0625 16:20:44.984974 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.024094 kubelet[3353]: E0625 16:20:45.024045 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.024094 kubelet[3353]: W0625 16:20:45.024089 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.024536 kubelet[3353]: E0625 16:20:45.024118 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.024766 kubelet[3353]: E0625 16:20:45.024733 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.024766 kubelet[3353]: W0625 16:20:45.024747 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.025021 kubelet[3353]: E0625 16:20:45.024769 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.025579 kubelet[3353]: E0625 16:20:45.025429 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.025579 kubelet[3353]: W0625 16:20:45.025448 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.025706 kubelet[3353]: E0625 16:20:45.025603 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.025840 kubelet[3353]: E0625 16:20:45.025823 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.025896 kubelet[3353]: W0625 16:20:45.025841 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.025896 kubelet[3353]: E0625 16:20:45.025858 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.026246 kubelet[3353]: E0625 16:20:45.026229 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.026400 kubelet[3353]: W0625 16:20:45.026254 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.026400 kubelet[3353]: E0625 16:20:45.026272 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.026677 kubelet[3353]: E0625 16:20:45.026660 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.026833 kubelet[3353]: W0625 16:20:45.026678 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.026833 kubelet[3353]: E0625 16:20:45.026695 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.027100 kubelet[3353]: E0625 16:20:45.027086 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.027166 kubelet[3353]: W0625 16:20:45.027101 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.027166 kubelet[3353]: E0625 16:20:45.027121 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.027332 kubelet[3353]: E0625 16:20:45.027318 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.027381 kubelet[3353]: W0625 16:20:45.027332 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.027381 kubelet[3353]: E0625 16:20:45.027348 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.027561 kubelet[3353]: E0625 16:20:45.027546 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.027617 kubelet[3353]: W0625 16:20:45.027562 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.027617 kubelet[3353]: E0625 16:20:45.027578 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.027778 kubelet[3353]: E0625 16:20:45.027765 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.027830 kubelet[3353]: W0625 16:20:45.027779 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.027830 kubelet[3353]: E0625 16:20:45.027795 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.027988 kubelet[3353]: E0625 16:20:45.027974 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.028049 kubelet[3353]: W0625 16:20:45.027989 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.028049 kubelet[3353]: E0625 16:20:45.028004 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.028244 kubelet[3353]: E0625 16:20:45.028231 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.028298 kubelet[3353]: W0625 16:20:45.028245 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.028298 kubelet[3353]: E0625 16:20:45.028261 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.028471 kubelet[3353]: E0625 16:20:45.028456 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.028536 kubelet[3353]: W0625 16:20:45.028472 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.028536 kubelet[3353]: E0625 16:20:45.028488 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.028673 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.029141 kubelet[3353]: W0625 16:20:45.028683 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.028700 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.028879 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.029141 kubelet[3353]: W0625 16:20:45.028887 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.028901 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.029098 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.029141 kubelet[3353]: W0625 16:20:45.029106 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.029141 kubelet[3353]: E0625 16:20:45.029121 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.029661 kubelet[3353]: E0625 16:20:45.029419 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.029661 kubelet[3353]: W0625 16:20:45.029429 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.029661 kubelet[3353]: E0625 16:20:45.029446 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.029661 kubelet[3353]: E0625 16:20:45.029621 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.029661 kubelet[3353]: W0625 16:20:45.029630 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.029661 kubelet[3353]: E0625 16:20:45.029644 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.030158 kubelet[3353]: E0625 16:20:45.030019 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.030158 kubelet[3353]: W0625 16:20:45.030032 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.030158 kubelet[3353]: E0625 16:20:45.030051 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.031085 kubelet[3353]: E0625 16:20:45.030537 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.031085 kubelet[3353]: W0625 16:20:45.030551 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.031085 kubelet[3353]: E0625 16:20:45.030567 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.056379 kubelet[3353]: E0625 16:20:45.055323 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.056379 kubelet[3353]: W0625 16:20:45.056337 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.056379 kubelet[3353]: E0625 16:20:45.056381 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.061092 kubelet[3353]: E0625 16:20:45.056821 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061092 kubelet[3353]: W0625 16:20:45.056841 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061092 kubelet[3353]: E0625 16:20:45.056865 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.061092 kubelet[3353]: E0625 16:20:45.057154 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061092 kubelet[3353]: W0625 16:20:45.057164 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061092 kubelet[3353]: E0625 16:20:45.057180 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
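[Annotation] The three-line pattern that repeats throughout this window comes from the kubelet's FlexVolume prober: it finds a vendor directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to run the driver binary uds with the init argument, and the binary is not installed on this node. The exec failure ("executable file not found in $PATH") leaves the driver output empty, and decoding that empty output is what produces "unexpected end of JSON input". The Go sketch below only mimics those two steps to show where each message comes from; it is not the kubelet's driver-call.go, and the JSON shape mentioned in the comment is an assumption about what a well-behaved FlexVolume driver would print.

```go
// Minimal sketch (not kubelet's driver-call.go): reproduce the two messages above.
// Assumption: a FlexVolume driver invoked as `<driver> init` is expected to print a
// JSON status object such as {"status":"Success","capabilities":{"attach":false}}.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Running a driver name that is not on $PATH yields the W-level message:
	// "executable file not found in $PATH", with empty output.
	out, err := exec.Command("uds", "init").CombinedOutput()
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)

	// Decoding the empty output yields Go's standard JSON error, which the
	// kubelet logs verbatim as the E-level message.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output:", err) // "unexpected end of JSON input"
	}
}
```

Presumably, removing the stray nodeagent~uds plugin directory (or installing the driver it expects) would quiet this probe loop; the messages themselves are repeated on every plugin re-probe rather than indicating new failures.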
Error: unexpected end of JSON input" Jun 25 16:20:45.061092 kubelet[3353]: I0625 16:20:45.057219 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7047a1f0-d856-4ebc-9890-f4acf3b2eb78-registration-dir\") pod \"csi-node-driver-l82ff\" (UID: \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\") " pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:45.061092 kubelet[3353]: E0625 16:20:45.057942 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061092 kubelet[3353]: W0625 16:20:45.057956 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.057980 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.058228 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061839 kubelet[3353]: W0625 16:20:45.058237 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.058255 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.058883 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061839 kubelet[3353]: W0625 16:20:45.058895 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.058919 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.059152 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.061839 kubelet[3353]: W0625 16:20:45.059161 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.061839 kubelet[3353]: E0625 16:20:45.059473 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.062297 kubelet[3353]: I0625 16:20:45.059647 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7047a1f0-d856-4ebc-9890-f4acf3b2eb78-kubelet-dir\") pod \"csi-node-driver-l82ff\" (UID: \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\") " pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:45.062297 kubelet[3353]: E0625 16:20:45.059852 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.062297 kubelet[3353]: W0625 16:20:45.059862 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.062297 kubelet[3353]: E0625 16:20:45.059884 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.062297 kubelet[3353]: E0625 16:20:45.060128 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.062297 kubelet[3353]: W0625 16:20:45.060141 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.062297 kubelet[3353]: E0625 16:20:45.060186 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.086470 kubelet[3353]: E0625 16:20:45.086443 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.086681 kubelet[3353]: W0625 16:20:45.086663 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.086787 kubelet[3353]: E0625 16:20:45.086777 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.089038 kubelet[3353]: I0625 16:20:45.089014 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7047a1f0-d856-4ebc-9890-f4acf3b2eb78-varrun\") pod \"csi-node-driver-l82ff\" (UID: \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\") " pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:45.096095 kubelet[3353]: E0625 16:20:45.095986 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.096339 kubelet[3353]: W0625 16:20:45.096317 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.096460 kubelet[3353]: E0625 16:20:45.096447 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.096993 kubelet[3353]: E0625 16:20:45.096970 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.097134 kubelet[3353]: W0625 16:20:45.097119 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.097238 kubelet[3353]: E0625 16:20:45.097227 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.097571 kubelet[3353]: E0625 16:20:45.097558 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.097680 kubelet[3353]: W0625 16:20:45.097667 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.097767 kubelet[3353]: E0625 16:20:45.097758 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.097953 kubelet[3353]: I0625 16:20:45.097940 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7047a1f0-d856-4ebc-9890-f4acf3b2eb78-socket-dir\") pod \"csi-node-driver-l82ff\" (UID: \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\") " pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:45.098613 kubelet[3353]: E0625 16:20:45.098599 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.098706 kubelet[3353]: W0625 16:20:45.098694 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.098788 kubelet[3353]: E0625 16:20:45.098779 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.098985 kubelet[3353]: I0625 16:20:45.098973 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldhq\" (UniqueName: \"kubernetes.io/projected/7047a1f0-d856-4ebc-9890-f4acf3b2eb78-kube-api-access-lldhq\") pod \"csi-node-driver-l82ff\" (UID: \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\") " pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:45.100610 kubelet[3353]: E0625 16:20:45.100593 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.100722 kubelet[3353]: W0625 16:20:45.100709 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.100969 kubelet[3353]: E0625 16:20:45.100954 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
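[Annotation] In between the probe failures, the I-level reconciler_common entries record the volume manager verifying the volumes of pod calico-system/csi-node-driver-l82ff: registration-dir, kubelet-dir, varrun, socket-dir, plus the projected token kube-api-access-lldhq. A hypothetical sketch of what those hostPath volumes typically look like in a Calico CSI node-driver manifest follows; only the volume names come from the log, the paths are assumptions, not values read from this system.

```go
// Hypothetical sketch of the hostPath volumes the reconciler lines above verify.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func hostPath(name, path string) v1.Volume {
	t := v1.HostPathDirectoryOrCreate
	return v1.Volume{
		Name: name,
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	vols := []v1.Volume{
		hostPath("varrun", "/var/run"),                                     // assumed path
		hostPath("kubelet-dir", "/var/lib/kubelet"),                        // assumed path
		hostPath("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),   // assumed path
		hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"),  // assumed path
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```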
Error: unexpected end of JSON input" Jun 25 16:20:45.101254 kubelet[3353]: E0625 16:20:45.101243 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.101421 kubelet[3353]: W0625 16:20:45.101408 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.101619 kubelet[3353]: E0625 16:20:45.101607 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.101965 kubelet[3353]: E0625 16:20:45.101953 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.102073 kubelet[3353]: W0625 16:20:45.102048 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.102484 kubelet[3353]: E0625 16:20:45.102467 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.102924 kubelet[3353]: E0625 16:20:45.102910 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.103749 kubelet[3353]: W0625 16:20:45.103735 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.103864 kubelet[3353]: E0625 16:20:45.103853 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.104271 kubelet[3353]: E0625 16:20:45.104260 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.104341 kubelet[3353]: W0625 16:20:45.104333 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.113221 kubelet[3353]: E0625 16:20:45.113189 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.115268 kubelet[3353]: E0625 16:20:45.115174 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.115467 kubelet[3353]: W0625 16:20:45.115450 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.115580 kubelet[3353]: E0625 16:20:45.115571 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.200506 kubelet[3353]: E0625 16:20:45.200478 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.200757 kubelet[3353]: W0625 16:20:45.200736 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.200859 kubelet[3353]: E0625 16:20:45.200851 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.201344 kubelet[3353]: E0625 16:20:45.201329 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.201767 kubelet[3353]: W0625 16:20:45.201747 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.201897 kubelet[3353]: E0625 16:20:45.201887 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.202301 kubelet[3353]: E0625 16:20:45.202289 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.202457 kubelet[3353]: W0625 16:20:45.202380 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.202559 kubelet[3353]: E0625 16:20:45.202548 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.204296 kubelet[3353]: E0625 16:20:45.204282 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.204416 kubelet[3353]: W0625 16:20:45.204403 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.204507 kubelet[3353]: E0625 16:20:45.204499 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.204850 kubelet[3353]: E0625 16:20:45.204838 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.205096 kubelet[3353]: W0625 16:20:45.205045 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.205211 kubelet[3353]: E0625 16:20:45.205200 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.205501 kubelet[3353]: E0625 16:20:45.205490 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.205586 kubelet[3353]: W0625 16:20:45.205574 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.205739 kubelet[3353]: E0625 16:20:45.205731 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.206038 kubelet[3353]: E0625 16:20:45.206026 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.206171 kubelet[3353]: W0625 16:20:45.206157 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.206254 kubelet[3353]: E0625 16:20:45.206244 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.206683 kubelet[3353]: E0625 16:20:45.206670 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.206844 kubelet[3353]: W0625 16:20:45.206827 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.206969 kubelet[3353]: E0625 16:20:45.206957 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.207322 kubelet[3353]: E0625 16:20:45.207310 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.207415 kubelet[3353]: W0625 16:20:45.207403 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.207505 kubelet[3353]: E0625 16:20:45.207496 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.207951 kubelet[3353]: E0625 16:20:45.207939 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.208281 kubelet[3353]: W0625 16:20:45.208263 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.208399 kubelet[3353]: E0625 16:20:45.208389 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.208819 kubelet[3353]: E0625 16:20:45.208808 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.208989 kubelet[3353]: W0625 16:20:45.208976 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.209181 kubelet[3353]: E0625 16:20:45.209171 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.209474 kubelet[3353]: E0625 16:20:45.209462 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.209564 kubelet[3353]: W0625 16:20:45.209552 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.209657 kubelet[3353]: E0625 16:20:45.209649 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.209990 kubelet[3353]: E0625 16:20:45.209979 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.210111 kubelet[3353]: W0625 16:20:45.210099 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.210222 kubelet[3353]: E0625 16:20:45.210213 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.210637 kubelet[3353]: E0625 16:20:45.210613 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.210735 kubelet[3353]: W0625 16:20:45.210722 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.210830 kubelet[3353]: E0625 16:20:45.210821 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.211192 kubelet[3353]: E0625 16:20:45.211181 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.211322 kubelet[3353]: W0625 16:20:45.211309 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.211446 kubelet[3353]: E0625 16:20:45.211400 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.217385 kubelet[3353]: E0625 16:20:45.217356 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.219092 kubelet[3353]: W0625 16:20:45.219042 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.219268 kubelet[3353]: E0625 16:20:45.219255 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.219775 kubelet[3353]: E0625 16:20:45.219758 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.219899 kubelet[3353]: W0625 16:20:45.219887 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.219986 kubelet[3353]: E0625 16:20:45.219977 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.220585 kubelet[3353]: E0625 16:20:45.220573 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.220703 kubelet[3353]: W0625 16:20:45.220691 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.220795 kubelet[3353]: E0625 16:20:45.220787 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.221179 kubelet[3353]: E0625 16:20:45.221166 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.221284 kubelet[3353]: W0625 16:20:45.221272 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.221430 kubelet[3353]: E0625 16:20:45.221356 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.222012 kubelet[3353]: E0625 16:20:45.221989 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.222517 kubelet[3353]: W0625 16:20:45.222378 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.222739 kubelet[3353]: E0625 16:20:45.222727 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.223635 kubelet[3353]: E0625 16:20:45.223556 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.223736 kubelet[3353]: W0625 16:20:45.223722 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.224666 kubelet[3353]: E0625 16:20:45.223822 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.225671 kubelet[3353]: E0625 16:20:45.225658 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.225770 kubelet[3353]: W0625 16:20:45.225756 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.225999 kubelet[3353]: E0625 16:20:45.225984 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.226166 kubelet[3353]: E0625 16:20:45.226156 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.226639 kubelet[3353]: W0625 16:20:45.226334 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.226848 kubelet[3353]: E0625 16:20:45.226838 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.227114 kubelet[3353]: E0625 16:20:45.227105 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.227194 kubelet[3353]: W0625 16:20:45.227184 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.227360 kubelet[3353]: E0625 16:20:45.227347 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.227492 kubelet[3353]: E0625 16:20:45.227484 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.227634 kubelet[3353]: W0625 16:20:45.227554 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.227852 kubelet[3353]: E0625 16:20:45.227838 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.228009 kubelet[3353]: E0625 16:20:45.228000 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.228104 kubelet[3353]: W0625 16:20:45.228094 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.228191 kubelet[3353]: E0625 16:20:45.228184 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.228475 kubelet[3353]: E0625 16:20:45.228465 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.228669 kubelet[3353]: W0625 16:20:45.228658 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.228746 kubelet[3353]: E0625 16:20:45.228739 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.229245 kubelet[3353]: E0625 16:20:45.229233 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.229354 kubelet[3353]: W0625 16:20:45.229344 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.229513 kubelet[3353]: E0625 16:20:45.229502 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.229834 kubelet[3353]: E0625 16:20:45.229824 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.229923 kubelet[3353]: W0625 16:20:45.229913 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.230182 kubelet[3353]: E0625 16:20:45.230171 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.230603 kubelet[3353]: E0625 16:20:45.230591 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.230702 kubelet[3353]: W0625 16:20:45.230691 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.230786 kubelet[3353]: E0625 16:20:45.230778 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.325556 kubelet[3353]: E0625 16:20:45.325453 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.325556 kubelet[3353]: W0625 16:20:45.325480 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.325556 kubelet[3353]: E0625 16:20:45.325511 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.326274 kubelet[3353]: E0625 16:20:45.325833 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.326274 kubelet[3353]: W0625 16:20:45.325906 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.326274 kubelet[3353]: E0625 16:20:45.325929 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.327334 kubelet[3353]: E0625 16:20:45.326831 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.327334 kubelet[3353]: W0625 16:20:45.327133 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.327334 kubelet[3353]: E0625 16:20:45.327156 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.327870 kubelet[3353]: E0625 16:20:45.327849 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.327870 kubelet[3353]: W0625 16:20:45.327861 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.327969 kubelet[3353]: E0625 16:20:45.327880 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.332618 kubelet[3353]: E0625 16:20:45.328121 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.332618 kubelet[3353]: W0625 16:20:45.328133 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.332618 kubelet[3353]: E0625 16:20:45.328149 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.332618 kubelet[3353]: E0625 16:20:45.328361 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.332618 kubelet[3353]: W0625 16:20:45.328370 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.332618 kubelet[3353]: E0625 16:20:45.328402 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.429590 kubelet[3353]: E0625 16:20:45.429558 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.429590 kubelet[3353]: W0625 16:20:45.429586 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.429805 kubelet[3353]: E0625 16:20:45.429614 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.429899 kubelet[3353]: E0625 16:20:45.429878 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.429949 kubelet[3353]: W0625 16:20:45.429901 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.429949 kubelet[3353]: E0625 16:20:45.429920 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.430167 kubelet[3353]: E0625 16:20:45.430152 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.430228 kubelet[3353]: W0625 16:20:45.430167 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.430228 kubelet[3353]: E0625 16:20:45.430184 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.430491 kubelet[3353]: E0625 16:20:45.430476 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.430557 kubelet[3353]: W0625 16:20:45.430492 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.430557 kubelet[3353]: E0625 16:20:45.430510 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.430730 kubelet[3353]: E0625 16:20:45.430716 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.430781 kubelet[3353]: W0625 16:20:45.430731 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.430781 kubelet[3353]: E0625 16:20:45.430748 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.430955 kubelet[3353]: E0625 16:20:45.430942 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.431007 kubelet[3353]: W0625 16:20:45.430971 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.431007 kubelet[3353]: E0625 16:20:45.430988 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.531587 kubelet[3353]: E0625 16:20:45.531552 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.531587 kubelet[3353]: W0625 16:20:45.531583 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.531836 kubelet[3353]: E0625 16:20:45.531610 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.531889 kubelet[3353]: E0625 16:20:45.531849 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.531889 kubelet[3353]: W0625 16:20:45.531860 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.531889 kubelet[3353]: E0625 16:20:45.531877 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.532126 kubelet[3353]: E0625 16:20:45.532111 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.532126 kubelet[3353]: W0625 16:20:45.532125 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.532293 kubelet[3353]: E0625 16:20:45.532141 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.532375 kubelet[3353]: E0625 16:20:45.532361 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.532440 kubelet[3353]: W0625 16:20:45.532374 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.532440 kubelet[3353]: E0625 16:20:45.532390 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.532619 kubelet[3353]: E0625 16:20:45.532604 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.532673 kubelet[3353]: W0625 16:20:45.532620 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.532673 kubelet[3353]: E0625 16:20:45.532636 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.532844 kubelet[3353]: E0625 16:20:45.532829 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.532895 kubelet[3353]: W0625 16:20:45.532844 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.532895 kubelet[3353]: E0625 16:20:45.532862 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.579000 audit[3827]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:45.579000 audit[3827]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff63cb0440 a2=0 a3=7fff63cb042c items=0 ppid=3494 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:45.582000 audit[3827]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:45.582000 audit[3827]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff63cb0440 a2=0 a3=0 items=0 ppid=3494 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:45.634405 kubelet[3353]: E0625 16:20:45.634374 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.634405 kubelet[3353]: W0625 16:20:45.634395 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.634668 kubelet[3353]: E0625 16:20:45.634422 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.634913 kubelet[3353]: E0625 16:20:45.634892 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.634913 kubelet[3353]: W0625 16:20:45.634908 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.635168 kubelet[3353]: E0625 16:20:45.634932 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.635506 kubelet[3353]: E0625 16:20:45.635492 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.635604 kubelet[3353]: W0625 16:20:45.635590 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.635785 kubelet[3353]: E0625 16:20:45.635771 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
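[Annotation] The two audit records in this stretch show an iptables-restore run (pid 3827, via /usr/sbin/xtables-nft-multi) registering filter and nat rules. The PROCTITLE field is the process command line, hex-encoded with NUL separators between arguments; decoding it recovers the exact invocation, as in this small sketch:

```go
// Decode the audit PROCTITLE field from the records above: argv is hex-encoded
// with NUL bytes between the individual arguments.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```

That is, a rule reload that waits up to 5 s for the xtables lock (polling every 100000 µs), does not flush existing chains, and restores counters, which is consistent with a service-proxy style incremental rule update rather than a full rewrite.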
Error: unexpected end of JSON input" Jun 25 16:20:45.636212 kubelet[3353]: E0625 16:20:45.636199 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.636337 kubelet[3353]: W0625 16:20:45.636324 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.636429 kubelet[3353]: E0625 16:20:45.636419 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.636863 kubelet[3353]: E0625 16:20:45.636851 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.637033 kubelet[3353]: W0625 16:20:45.637019 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.637290 kubelet[3353]: E0625 16:20:45.637276 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.637580 kubelet[3353]: E0625 16:20:45.637569 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.637677 kubelet[3353]: W0625 16:20:45.637665 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.637752 kubelet[3353]: E0625 16:20:45.637743 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.739702 kubelet[3353]: E0625 16:20:45.739538 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.739702 kubelet[3353]: W0625 16:20:45.739703 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.740533 kubelet[3353]: E0625 16:20:45.739732 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.740966 kubelet[3353]: E0625 16:20:45.740945 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.740966 kubelet[3353]: W0625 16:20:45.740965 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.741119 kubelet[3353]: E0625 16:20:45.741010 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.741293 kubelet[3353]: E0625 16:20:45.741276 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.741355 kubelet[3353]: W0625 16:20:45.741293 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.741355 kubelet[3353]: E0625 16:20:45.741310 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.741544 kubelet[3353]: E0625 16:20:45.741528 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.741603 kubelet[3353]: W0625 16:20:45.741544 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.741603 kubelet[3353]: E0625 16:20:45.741562 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.741836 kubelet[3353]: E0625 16:20:45.741821 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.741836 kubelet[3353]: W0625 16:20:45.741835 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.741966 kubelet[3353]: E0625 16:20:45.741850 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.742102 kubelet[3353]: E0625 16:20:45.742088 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.742161 kubelet[3353]: W0625 16:20:45.742102 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.742161 kubelet[3353]: E0625 16:20:45.742118 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.826939 kubelet[3353]: E0625 16:20:45.826918 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.827131 kubelet[3353]: W0625 16:20:45.827112 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.827315 kubelet[3353]: E0625 16:20:45.827303 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.862880 kubelet[3353]: E0625 16:20:45.862774 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.863051 kubelet[3353]: W0625 16:20:45.863031 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.863168 kubelet[3353]: E0625 16:20:45.863156 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.867544 kubelet[3353]: E0625 16:20:45.867350 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.867859 kubelet[3353]: W0625 16:20:45.867835 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.868002 kubelet[3353]: E0625 16:20:45.867989 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.871137 kubelet[3353]: E0625 16:20:45.871111 3353 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:20:45.873804 kubelet[3353]: E0625 16:20:45.873778 3353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-tigera-ca-bundle podName:cc6f5539-70a6-4fce-a5c3-f9922c339c2e nodeName:}" failed. No retries permitted until 2024-06-25 16:20:46.371691844 +0000 UTC m=+20.132895614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-tigera-ca-bundle") pod "calico-typha-6df6fb56db-fmmr7" (UID: "cc6f5539-70a6-4fce-a5c3-f9922c339c2e") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:20:45.877268 kubelet[3353]: E0625 16:20:45.877204 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.877636 kubelet[3353]: W0625 16:20:45.877616 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.877856 kubelet[3353]: E0625 16:20:45.877842 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.878502 kubelet[3353]: E0625 16:20:45.878488 3353 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 16:20:45.878661 kubelet[3353]: E0625 16:20:45.878649 3353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-typha-certs podName:cc6f5539-70a6-4fce-a5c3-f9922c339c2e nodeName:}" failed. No retries permitted until 2024-06-25 16:20:46.378623629 +0000 UTC m=+20.139827394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/cc6f5539-70a6-4fce-a5c3-f9922c339c2e-typha-certs") pod "calico-typha-6df6fb56db-fmmr7" (UID: "cc6f5539-70a6-4fce-a5c3-f9922c339c2e") : failed to sync secret cache: timed out waiting for the condition Jun 25 16:20:45.882398 kubelet[3353]: E0625 16:20:45.882380 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.882822 kubelet[3353]: W0625 16:20:45.882631 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.882942 kubelet[3353]: E0625 16:20:45.882932 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.883582 kubelet[3353]: E0625 16:20:45.883569 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.883712 kubelet[3353]: W0625 16:20:45.883698 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.883916 kubelet[3353]: E0625 16:20:45.883899 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.951091 kubelet[3353]: E0625 16:20:45.951035 3353 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:20:45.951249 kubelet[3353]: E0625 16:20:45.951144 3353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64b645d3-ddef-4ef3-b035-161e91c52d46-tigera-ca-bundle podName:64b645d3-ddef-4ef3-b035-161e91c52d46 nodeName:}" failed. No retries permitted until 2024-06-25 16:20:46.4511211 +0000 UTC m=+20.212324855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/64b645d3-ddef-4ef3-b035-161e91c52d46-tigera-ca-bundle") pod "calico-node-t8jhg" (UID: "64b645d3-ddef-4ef3-b035-161e91c52d46") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:20:45.985477 kubelet[3353]: E0625 16:20:45.985445 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.985477 kubelet[3353]: W0625 16:20:45.985473 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.985738 kubelet[3353]: E0625 16:20:45.985501 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:45.985844 kubelet[3353]: E0625 16:20:45.985822 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.985844 kubelet[3353]: W0625 16:20:45.985838 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.985964 kubelet[3353]: E0625 16:20:45.985856 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:45.986271 kubelet[3353]: E0625 16:20:45.986239 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:45.986271 kubelet[3353]: W0625 16:20:45.986254 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:45.986407 kubelet[3353]: E0625 16:20:45.986281 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.087166 kubelet[3353]: E0625 16:20:46.087139 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.087166 kubelet[3353]: W0625 16:20:46.087164 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.087369 kubelet[3353]: E0625 16:20:46.087190 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.087696 kubelet[3353]: E0625 16:20:46.087677 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.087696 kubelet[3353]: W0625 16:20:46.087695 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.087865 kubelet[3353]: E0625 16:20:46.087715 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.088019 kubelet[3353]: E0625 16:20:46.088005 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.088084 kubelet[3353]: W0625 16:20:46.088020 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.088084 kubelet[3353]: E0625 16:20:46.088040 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.189744 kubelet[3353]: E0625 16:20:46.189707 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.189744 kubelet[3353]: W0625 16:20:46.189733 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.189995 kubelet[3353]: E0625 16:20:46.189760 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.190258 kubelet[3353]: E0625 16:20:46.190237 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.190258 kubelet[3353]: W0625 16:20:46.190253 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.190398 kubelet[3353]: E0625 16:20:46.190274 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.190637 kubelet[3353]: E0625 16:20:46.190606 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.190637 kubelet[3353]: W0625 16:20:46.190623 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.190755 kubelet[3353]: E0625 16:20:46.190641 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.293201 kubelet[3353]: E0625 16:20:46.293172 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.293201 kubelet[3353]: W0625 16:20:46.293194 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.293590 kubelet[3353]: E0625 16:20:46.293223 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.294120 kubelet[3353]: E0625 16:20:46.293741 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.294214 kubelet[3353]: W0625 16:20:46.294126 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.294214 kubelet[3353]: E0625 16:20:46.294153 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.294481 kubelet[3353]: E0625 16:20:46.294463 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.294481 kubelet[3353]: W0625 16:20:46.294481 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.294585 kubelet[3353]: E0625 16:20:46.294498 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.395443 kubelet[3353]: E0625 16:20:46.395405 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.395443 kubelet[3353]: W0625 16:20:46.395431 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.395880 kubelet[3353]: E0625 16:20:46.395459 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.395967 kubelet[3353]: E0625 16:20:46.395945 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.395967 kubelet[3353]: W0625 16:20:46.395964 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.396088 kubelet[3353]: E0625 16:20:46.395992 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.396304 kubelet[3353]: E0625 16:20:46.396270 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.396304 kubelet[3353]: W0625 16:20:46.396313 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.396449 kubelet[3353]: E0625 16:20:46.396337 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.396621 kubelet[3353]: E0625 16:20:46.396604 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.396711 kubelet[3353]: W0625 16:20:46.396621 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.396711 kubelet[3353]: E0625 16:20:46.396643 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.396973 kubelet[3353]: E0625 16:20:46.396958 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.397128 kubelet[3353]: W0625 16:20:46.396975 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.397205 kubelet[3353]: E0625 16:20:46.397152 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.397310 kubelet[3353]: E0625 16:20:46.397296 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.397383 kubelet[3353]: W0625 16:20:46.397313 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.397383 kubelet[3353]: E0625 16:20:46.397333 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.397587 kubelet[3353]: E0625 16:20:46.397570 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.397587 kubelet[3353]: W0625 16:20:46.397587 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.397730 kubelet[3353]: E0625 16:20:46.397606 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.397833 kubelet[3353]: E0625 16:20:46.397820 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.397905 kubelet[3353]: W0625 16:20:46.397834 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.397905 kubelet[3353]: E0625 16:20:46.397857 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.398128 kubelet[3353]: E0625 16:20:46.398114 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.398128 kubelet[3353]: W0625 16:20:46.398128 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.398254 kubelet[3353]: E0625 16:20:46.398147 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.398923 kubelet[3353]: E0625 16:20:46.398905 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.399022 kubelet[3353]: W0625 16:20:46.398924 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.399022 kubelet[3353]: E0625 16:20:46.398942 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.408091 kubelet[3353]: E0625 16:20:46.402390 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.408091 kubelet[3353]: W0625 16:20:46.402406 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.408091 kubelet[3353]: E0625 16:20:46.402434 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.412090 kubelet[3353]: E0625 16:20:46.411388 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.412090 kubelet[3353]: W0625 16:20:46.411413 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.412090 kubelet[3353]: E0625 16:20:46.411556 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.412090 kubelet[3353]: E0625 16:20:46.411812 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.412090 kubelet[3353]: W0625 16:20:46.411823 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.412090 kubelet[3353]: E0625 16:20:46.411843 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.497522 kubelet[3353]: E0625 16:20:46.497409 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.497522 kubelet[3353]: W0625 16:20:46.497432 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.497522 kubelet[3353]: E0625 16:20:46.497459 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.500944 kubelet[3353]: E0625 16:20:46.500102 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.500944 kubelet[3353]: W0625 16:20:46.500129 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.500944 kubelet[3353]: E0625 16:20:46.500157 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.502073 containerd[1894]: time="2024-06-25T16:20:46.502005195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df6fb56db-fmmr7,Uid:cc6f5539-70a6-4fce-a5c3-f9922c339c2e,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:46.502724 kubelet[3353]: E0625 16:20:46.502702 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.502724 kubelet[3353]: W0625 16:20:46.502721 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.502882 kubelet[3353]: E0625 16:20:46.502746 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.503401 kubelet[3353]: E0625 16:20:46.503384 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.503492 kubelet[3353]: W0625 16:20:46.503402 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.503492 kubelet[3353]: E0625 16:20:46.503428 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.503854 kubelet[3353]: E0625 16:20:46.503775 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.503854 kubelet[3353]: W0625 16:20:46.503788 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.503854 kubelet[3353]: E0625 16:20:46.503805 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:20:46.505532 kubelet[3353]: E0625 16:20:46.505515 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:20:46.505532 kubelet[3353]: W0625 16:20:46.505532 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:20:46.505665 kubelet[3353]: E0625 16:20:46.505550 3353 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:20:46.558004 containerd[1894]: time="2024-06-25T16:20:46.557877956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:46.558177 containerd[1894]: time="2024-06-25T16:20:46.558033709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:46.558177 containerd[1894]: time="2024-06-25T16:20:46.558124707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:46.558305 containerd[1894]: time="2024-06-25T16:20:46.558197858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:46.605265 containerd[1894]: time="2024-06-25T16:20:46.604202487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8jhg,Uid:64b645d3-ddef-4ef3-b035-161e91c52d46,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:46.713033 containerd[1894]: time="2024-06-25T16:20:46.707642554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:46.713033 containerd[1894]: time="2024-06-25T16:20:46.712179581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:46.713033 containerd[1894]: time="2024-06-25T16:20:46.712344776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:46.713033 containerd[1894]: time="2024-06-25T16:20:46.712365977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:46.728086 kubelet[3353]: E0625 16:20:46.726267 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:46.917562 containerd[1894]: time="2024-06-25T16:20:46.917513231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8jhg,Uid:64b645d3-ddef-4ef3-b035-161e91c52d46,Namespace:calico-system,Attempt:0,} returns sandbox id \"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\"" Jun 25 16:20:46.921284 containerd[1894]: time="2024-06-25T16:20:46.921238426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:20:46.923249 containerd[1894]: time="2024-06-25T16:20:46.923197969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df6fb56db-fmmr7,Uid:cc6f5539-70a6-4fce-a5c3-f9922c339c2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf2282fba9d1cf4540114f19760f26132ca00b4ca0ae79f4767d489d9657e006\"" Jun 25 16:20:48.459267 containerd[1894]: time="2024-06-25T16:20:48.459220138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:48.460918 containerd[1894]: time="2024-06-25T16:20:48.460858747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:20:48.462982 containerd[1894]: time="2024-06-25T16:20:48.462947274Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:48.465704 containerd[1894]: time="2024-06-25T16:20:48.465669433Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:48.468299 containerd[1894]: time="2024-06-25T16:20:48.468263167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:48.469082 containerd[1894]: time="2024-06-25T16:20:48.469009830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.547716678s" Jun 25 16:20:48.469180 containerd[1894]: time="2024-06-25T16:20:48.469082282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:20:48.473388 containerd[1894]: time="2024-06-25T16:20:48.470592156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:20:48.473388 containerd[1894]: time="2024-06-25T16:20:48.472691160Z" level=info msg="CreateContainer within sandbox 
\"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:20:48.519198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806249651.mount: Deactivated successfully. Jun 25 16:20:48.567432 containerd[1894]: time="2024-06-25T16:20:48.567347128Z" level=info msg="CreateContainer within sandbox \"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03\"" Jun 25 16:20:48.568545 containerd[1894]: time="2024-06-25T16:20:48.568510172Z" level=info msg="StartContainer for \"640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03\"" Jun 25 16:20:48.673546 systemd[1]: run-containerd-runc-k8s.io-640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03-runc.Dne7ub.mount: Deactivated successfully. Jun 25 16:20:48.721786 kubelet[3353]: E0625 16:20:48.720319 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:48.789080 containerd[1894]: time="2024-06-25T16:20:48.788327238Z" level=info msg="StartContainer for \"640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03\" returns successfully" Jun 25 16:20:49.015544 containerd[1894]: time="2024-06-25T16:20:48.935230474Z" level=info msg="shim disconnected" id=640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03 namespace=k8s.io Jun 25 16:20:49.015544 containerd[1894]: time="2024-06-25T16:20:49.015419940Z" level=warning msg="cleaning up after shim disconnected" id=640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03 namespace=k8s.io Jun 25 16:20:49.015544 containerd[1894]: time="2024-06-25T16:20:49.015487916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:20:49.516605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-640f381d4d0ac21581e2a17c11188fb49457a42cc2b386026d1887cc16095f03-rootfs.mount: Deactivated successfully. 
Jun 25 16:20:50.721304 kubelet[3353]: E0625 16:20:50.720458 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:51.290034 containerd[1894]: time="2024-06-25T16:20:51.289978265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:51.293155 containerd[1894]: time="2024-06-25T16:20:51.293076331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:20:51.294849 containerd[1894]: time="2024-06-25T16:20:51.294802217Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:51.300507 containerd[1894]: time="2024-06-25T16:20:51.300396859Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:51.305002 containerd[1894]: time="2024-06-25T16:20:51.304946767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:51.309667 containerd[1894]: time="2024-06-25T16:20:51.309556635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.838847623s" Jun 25 16:20:51.310644 containerd[1894]: time="2024-06-25T16:20:51.310588707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:20:51.330738 containerd[1894]: time="2024-06-25T16:20:51.330684987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:20:51.340977 containerd[1894]: time="2024-06-25T16:20:51.340843794Z" level=info msg="CreateContainer within sandbox \"cf2282fba9d1cf4540114f19760f26132ca00b4ca0ae79f4767d489d9657e006\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:20:51.381313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245530638.mount: Deactivated successfully. 
Jun 25 16:20:51.392139 containerd[1894]: time="2024-06-25T16:20:51.392051683Z" level=info msg="CreateContainer within sandbox \"cf2282fba9d1cf4540114f19760f26132ca00b4ca0ae79f4767d489d9657e006\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e66a6930623d0267d5056b3a2a8ba6c21dfcbdde74510b81b383e72fbb58f3b5\"" Jun 25 16:20:51.394964 containerd[1894]: time="2024-06-25T16:20:51.394143885Z" level=info msg="StartContainer for \"e66a6930623d0267d5056b3a2a8ba6c21dfcbdde74510b81b383e72fbb58f3b5\"" Jun 25 16:20:51.508050 containerd[1894]: time="2024-06-25T16:20:51.508002496Z" level=info msg="StartContainer for \"e66a6930623d0267d5056b3a2a8ba6c21dfcbdde74510b81b383e72fbb58f3b5\" returns successfully" Jun 25 16:20:51.964500 kubelet[3353]: I0625 16:20:51.963957 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6df6fb56db-fmmr7" podStartSLOduration=3.583597615 podCreationTimestamp="2024-06-25 16:20:44 +0000 UTC" firstStartedPulling="2024-06-25 16:20:46.931025523 +0000 UTC m=+20.692229283" lastFinishedPulling="2024-06-25 16:20:51.311236693 +0000 UTC m=+25.072440446" observedRunningTime="2024-06-25 16:20:51.949769048 +0000 UTC m=+25.710972819" watchObservedRunningTime="2024-06-25 16:20:51.963808778 +0000 UTC m=+25.725012534" Jun 25 16:20:51.996162 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:20:51.996327 kernel: audit: type=1325 audit(1719332451.992:284): table=filter:95 family=2 entries=15 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:51.992000 audit[4094]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:51.992000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffde8a30130 a2=0 a3=7ffde8a3011c items=0 ppid=3494 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.000074 kernel: audit: type=1300 audit(1719332451.992:284): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffde8a30130 a2=0 a3=7ffde8a3011c items=0 ppid=3494 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:51.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:52.003564 kernel: audit: type=1327 audit(1719332451.992:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:52.003663 kernel: audit: type=1325 audit(1719332451.994:285): table=nat:96 family=2 entries=19 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:51.994000 audit[4094]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:51.994000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde8a30130 a2=0 a3=7ffde8a3011c items=0 ppid=3494 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.008764 kernel: audit: type=1300 audit(1719332451.994:285): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde8a30130 a2=0 a3=7ffde8a3011c items=0 ppid=3494 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.009193 kernel: audit: type=1327 audit(1719332451.994:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:51.994000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:52.721497 kubelet[3353]: E0625 16:20:52.721404 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:54.722469 kubelet[3353]: E0625 16:20:54.721373 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:56.517025 containerd[1894]: time="2024-06-25T16:20:56.516969348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:56.518571 containerd[1894]: time="2024-06-25T16:20:56.518476539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:20:56.520580 containerd[1894]: time="2024-06-25T16:20:56.520546551Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:56.522970 containerd[1894]: time="2024-06-25T16:20:56.522936161Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:56.525773 containerd[1894]: time="2024-06-25T16:20:56.525734744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:56.526597 containerd[1894]: time="2024-06-25T16:20:56.526554195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.195617093s" Jun 25 16:20:56.526685 containerd[1894]: time="2024-06-25T16:20:56.526603924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:20:56.529117 containerd[1894]: time="2024-06-25T16:20:56.529047325Z" level=info msg="CreateContainer within sandbox 
\"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:20:56.592928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346974124.mount: Deactivated successfully. Jun 25 16:20:56.603232 containerd[1894]: time="2024-06-25T16:20:56.603184981Z" level=info msg="CreateContainer within sandbox \"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c\"" Jun 25 16:20:56.606012 containerd[1894]: time="2024-06-25T16:20:56.603672448Z" level=info msg="StartContainer for \"17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c\"" Jun 25 16:20:56.684953 systemd[1]: run-containerd-runc-k8s.io-17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c-runc.9pjYiP.mount: Deactivated successfully. Jun 25 16:20:56.721889 kubelet[3353]: E0625 16:20:56.721486 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:56.764251 containerd[1894]: time="2024-06-25T16:20:56.764206689Z" level=info msg="StartContainer for \"17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c\" returns successfully" Jun 25 16:20:57.697640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c-rootfs.mount: Deactivated successfully. Jun 25 16:20:57.711990 containerd[1894]: time="2024-06-25T16:20:57.710396730Z" level=info msg="shim disconnected" id=17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c namespace=k8s.io Jun 25 16:20:57.711990 containerd[1894]: time="2024-06-25T16:20:57.710536424Z" level=warning msg="cleaning up after shim disconnected" id=17553f1976578732e6dc59ecbbb7e5a837cc3025b0356aa087609e05adfdb47c namespace=k8s.io Jun 25 16:20:57.711990 containerd[1894]: time="2024-06-25T16:20:57.710551699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:20:57.742949 kubelet[3353]: I0625 16:20:57.742919 3353 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:20:57.775099 kubelet[3353]: I0625 16:20:57.775049 3353 topology_manager.go:215] "Topology Admit Handler" podUID="529c05d3-5db3-465d-a964-613b90de0483" podNamespace="kube-system" podName="coredns-5dd5756b68-t67mr" Jun 25 16:20:57.780708 kubelet[3353]: I0625 16:20:57.780655 3353 topology_manager.go:215] "Topology Admit Handler" podUID="aedaeef2-cca5-497d-9e7c-89ecb9122d1f" podNamespace="kube-system" podName="coredns-5dd5756b68-fkctm" Jun 25 16:20:57.784656 kubelet[3353]: I0625 16:20:57.784620 3353 topology_manager.go:215] "Topology Admit Handler" podUID="fb951166-44c7-4dc8-b31a-6c99194a7411" podNamespace="calico-system" podName="calico-kube-controllers-7584f4db69-64v2k" Jun 25 16:20:57.952138 kubelet[3353]: I0625 16:20:57.951987 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4dx\" (UniqueName: \"kubernetes.io/projected/aedaeef2-cca5-497d-9e7c-89ecb9122d1f-kube-api-access-bm4dx\") pod \"coredns-5dd5756b68-fkctm\" (UID: \"aedaeef2-cca5-497d-9e7c-89ecb9122d1f\") " pod="kube-system/coredns-5dd5756b68-fkctm" Jun 25 16:20:57.952138 
kubelet[3353]: I0625 16:20:57.952047 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjmc\" (UniqueName: \"kubernetes.io/projected/fb951166-44c7-4dc8-b31a-6c99194a7411-kube-api-access-fzjmc\") pod \"calico-kube-controllers-7584f4db69-64v2k\" (UID: \"fb951166-44c7-4dc8-b31a-6c99194a7411\") " pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" Jun 25 16:20:57.952138 kubelet[3353]: I0625 16:20:57.952101 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aedaeef2-cca5-497d-9e7c-89ecb9122d1f-config-volume\") pod \"coredns-5dd5756b68-fkctm\" (UID: \"aedaeef2-cca5-497d-9e7c-89ecb9122d1f\") " pod="kube-system/coredns-5dd5756b68-fkctm" Jun 25 16:20:57.952888 kubelet[3353]: I0625 16:20:57.952853 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb951166-44c7-4dc8-b31a-6c99194a7411-tigera-ca-bundle\") pod \"calico-kube-controllers-7584f4db69-64v2k\" (UID: \"fb951166-44c7-4dc8-b31a-6c99194a7411\") " pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" Jun 25 16:20:57.953356 kubelet[3353]: I0625 16:20:57.953335 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqx5v\" (UniqueName: \"kubernetes.io/projected/529c05d3-5db3-465d-a964-613b90de0483-kube-api-access-pqx5v\") pod \"coredns-5dd5756b68-t67mr\" (UID: \"529c05d3-5db3-465d-a964-613b90de0483\") " pod="kube-system/coredns-5dd5756b68-t67mr" Jun 25 16:20:57.953472 kubelet[3353]: I0625 16:20:57.953388 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529c05d3-5db3-465d-a964-613b90de0483-config-volume\") pod \"coredns-5dd5756b68-t67mr\" (UID: \"529c05d3-5db3-465d-a964-613b90de0483\") " pod="kube-system/coredns-5dd5756b68-t67mr" Jun 25 16:20:57.958140 containerd[1894]: time="2024-06-25T16:20:57.957436692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:20:58.108090 containerd[1894]: time="2024-06-25T16:20:58.108019067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fkctm,Uid:aedaeef2-cca5-497d-9e7c-89ecb9122d1f,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:58.121454 containerd[1894]: time="2024-06-25T16:20:58.121408273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7584f4db69-64v2k,Uid:fb951166-44c7-4dc8-b31a-6c99194a7411,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:58.126851 containerd[1894]: time="2024-06-25T16:20:58.126805006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t67mr,Uid:529c05d3-5db3-465d-a964-613b90de0483,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:58.434181 containerd[1894]: time="2024-06-25T16:20:58.434086530Z" level=error msg="Failed to destroy network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.434623 containerd[1894]: time="2024-06-25T16:20:58.434518842Z" level=error msg="Failed to destroy network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.435828 containerd[1894]: time="2024-06-25T16:20:58.435635842Z" level=error msg="encountered an error cleaning up failed sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.436731 containerd[1894]: time="2024-06-25T16:20:58.435664909Z" level=error msg="encountered an error cleaning up failed sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.473544 containerd[1894]: time="2024-06-25T16:20:58.473475116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t67mr,Uid:529c05d3-5db3-465d-a964-613b90de0483,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.474243 containerd[1894]: time="2024-06-25T16:20:58.474195467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7584f4db69-64v2k,Uid:fb951166-44c7-4dc8-b31a-6c99194a7411,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.475038 containerd[1894]: time="2024-06-25T16:20:58.474642801Z" level=error msg="Failed to destroy network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.475690 containerd[1894]: time="2024-06-25T16:20:58.475382205Z" level=error msg="encountered an error cleaning up failed sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.486019 containerd[1894]: time="2024-06-25T16:20:58.475453979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fkctm,Uid:aedaeef2-cca5-497d-9e7c-89ecb9122d1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jun 25 16:20:58.487971 kubelet[3353]: E0625 16:20:58.487943 3353 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.488251 kubelet[3353]: E0625 16:20:58.488027 3353 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-t67mr" Jun 25 16:20:58.488251 kubelet[3353]: E0625 16:20:58.488071 3353 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-t67mr" Jun 25 16:20:58.488372 kubelet[3353]: E0625 16:20:58.488251 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-t67mr_kube-system(529c05d3-5db3-465d-a964-613b90de0483)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-t67mr_kube-system(529c05d3-5db3-465d-a964-613b90de0483)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-t67mr" podUID="529c05d3-5db3-465d-a964-613b90de0483" Jun 25 16:20:58.491538 kubelet[3353]: E0625 16:20:58.489220 3353 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.491538 kubelet[3353]: E0625 16:20:58.489271 3353 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" Jun 25 16:20:58.491538 kubelet[3353]: E0625 16:20:58.489302 3353 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" Jun 25 16:20:58.491726 kubelet[3353]: E0625 16:20:58.489365 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7584f4db69-64v2k_calico-system(fb951166-44c7-4dc8-b31a-6c99194a7411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7584f4db69-64v2k_calico-system(fb951166-44c7-4dc8-b31a-6c99194a7411)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" podUID="fb951166-44c7-4dc8-b31a-6c99194a7411" Jun 25 16:20:58.491726 kubelet[3353]: E0625 16:20:58.489481 3353 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.491726 kubelet[3353]: E0625 16:20:58.489514 3353 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fkctm" Jun 25 16:20:58.491937 kubelet[3353]: E0625 16:20:58.489606 3353 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fkctm" Jun 25 16:20:58.491937 kubelet[3353]: E0625 16:20:58.489672 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fkctm_kube-system(aedaeef2-cca5-497d-9e7c-89ecb9122d1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fkctm_kube-system(aedaeef2-cca5-497d-9e7c-89ecb9122d1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fkctm" podUID="aedaeef2-cca5-497d-9e7c-89ecb9122d1f" Jun 25 16:20:58.697643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9-shm.mount: Deactivated successfully. 
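Every CNI ADD and DEL failure above trips over the same missing file: the calico plugin stats /var/lib/calico/nodename, which the calico/node container only writes once it is running with /var/lib/calico/ mounted (the error text itself says as much). A minimal Python sketch of that check, run on the node; the path comes from the error message, everything else is illustrative:

    # Approximate the check the plugin performs: calico/node writes this host's
    # name to /var/lib/calico/nodename once it is up; until then every CNI
    # ADD/DEL on the node fails with the stat error seen above.
    NODENAME = "/var/lib/calico/nodename"
    try:
        with open(NODENAME) as f:
            print("calico/node registered this host as:", f.read().strip())
    except FileNotFoundError:
        print(NODENAME, "is missing: calico-node is not running yet, "
              "or /var/lib/calico is not mounted into it")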
Jun 25 16:20:58.726246 containerd[1894]: time="2024-06-25T16:20:58.726203674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l82ff,Uid:7047a1f0-d856-4ebc-9890-f4acf3b2eb78,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:58.819215 containerd[1894]: time="2024-06-25T16:20:58.819150427Z" level=error msg="Failed to destroy network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.825918 containerd[1894]: time="2024-06-25T16:20:58.823819820Z" level=error msg="encountered an error cleaning up failed sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.825918 containerd[1894]: time="2024-06-25T16:20:58.824562475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l82ff,Uid:7047a1f0-d856-4ebc-9890-f4acf3b2eb78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.826045 kubelet[3353]: E0625 16:20:58.825594 3353 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:58.826045 kubelet[3353]: E0625 16:20:58.825701 3353 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:58.826045 kubelet[3353]: E0625 16:20:58.825730 3353 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l82ff" Jun 25 16:20:58.823533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c-shm.mount: Deactivated successfully. 
Jun 25 16:20:58.827852 kubelet[3353]: E0625 16:20:58.825821 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l82ff_calico-system(7047a1f0-d856-4ebc-9890-f4acf3b2eb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l82ff_calico-system(7047a1f0-d856-4ebc-9890-f4acf3b2eb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:20:58.960958 kubelet[3353]: I0625 16:20:58.960044 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:20:58.964393 kubelet[3353]: I0625 16:20:58.963987 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:20:58.977266 kubelet[3353]: I0625 16:20:58.977243 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:20:58.979162 kubelet[3353]: I0625 16:20:58.979142 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:20:58.994275 containerd[1894]: time="2024-06-25T16:20:58.994169847Z" level=info msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" Jun 25 16:20:58.994704 containerd[1894]: time="2024-06-25T16:20:58.994665975Z" level=info msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" Jun 25 16:20:58.999954 containerd[1894]: time="2024-06-25T16:20:58.999912867Z" level=info msg="Ensure that sandbox 3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33 in task-service has been cleanup successfully" Jun 25 16:20:59.001676 containerd[1894]: time="2024-06-25T16:20:59.001630319Z" level=info msg="Ensure that sandbox 80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca in task-service has been cleanup successfully" Jun 25 16:20:59.004032 containerd[1894]: time="2024-06-25T16:20:59.003993538Z" level=info msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" Jun 25 16:20:59.004841 containerd[1894]: time="2024-06-25T16:20:59.004774972Z" level=info msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" Jun 25 16:20:59.005224 containerd[1894]: time="2024-06-25T16:20:59.005169370Z" level=info msg="Ensure that sandbox bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9 in task-service has been cleanup successfully" Jun 25 16:20:59.005909 containerd[1894]: time="2024-06-25T16:20:59.005879702Z" level=info msg="Ensure that sandbox 422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c in task-service has been cleanup successfully" Jun 25 16:20:59.107432 containerd[1894]: time="2024-06-25T16:20:59.107368689Z" level=error msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" failed" error="failed to destroy network for sandbox 
\"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:59.108020 kubelet[3353]: E0625 16:20:59.107788 3353 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:20:59.108020 kubelet[3353]: E0625 16:20:59.107876 3353 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33"} Jun 25 16:20:59.108020 kubelet[3353]: E0625 16:20:59.107938 3353 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"529c05d3-5db3-465d-a964-613b90de0483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:59.108020 kubelet[3353]: E0625 16:20:59.107985 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"529c05d3-5db3-465d-a964-613b90de0483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-t67mr" podUID="529c05d3-5db3-465d-a964-613b90de0483" Jun 25 16:20:59.109230 containerd[1894]: time="2024-06-25T16:20:59.109185944Z" level=error msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" failed" error="failed to destroy network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:59.109545 kubelet[3353]: E0625 16:20:59.109514 3353 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:20:59.109654 kubelet[3353]: E0625 16:20:59.109570 3353 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca"} Jun 25 16:20:59.109654 kubelet[3353]: E0625 16:20:59.109614 3353 
kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb951166-44c7-4dc8-b31a-6c99194a7411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:59.109794 kubelet[3353]: E0625 16:20:59.109666 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb951166-44c7-4dc8-b31a-6c99194a7411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" podUID="fb951166-44c7-4dc8-b31a-6c99194a7411" Jun 25 16:20:59.120886 containerd[1894]: time="2024-06-25T16:20:59.120806775Z" level=error msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" failed" error="failed to destroy network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:59.121184 kubelet[3353]: E0625 16:20:59.121155 3353 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:20:59.121349 kubelet[3353]: E0625 16:20:59.121273 3353 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9"} Jun 25 16:20:59.121349 kubelet[3353]: E0625 16:20:59.121325 3353 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aedaeef2-cca5-497d-9e7c-89ecb9122d1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:59.121500 kubelet[3353]: E0625 16:20:59.121367 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aedaeef2-cca5-497d-9e7c-89ecb9122d1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-5dd5756b68-fkctm" podUID="aedaeef2-cca5-497d-9e7c-89ecb9122d1f" Jun 25 16:20:59.126668 containerd[1894]: time="2024-06-25T16:20:59.126598426Z" level=error msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" failed" error="failed to destroy network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:59.127037 kubelet[3353]: E0625 16:20:59.126959 3353 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:20:59.127168 kubelet[3353]: E0625 16:20:59.127042 3353 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c"} Jun 25 16:20:59.127168 kubelet[3353]: E0625 16:20:59.127121 3353 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:59.127168 kubelet[3353]: E0625 16:20:59.127161 3353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7047a1f0-d856-4ebc-9890-f4acf3b2eb78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l82ff" podUID="7047a1f0-d856-4ebc-9890-f4acf3b2eb78" Jun 25 16:21:06.266175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335892015.mount: Deactivated successfully. 
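The StopPodSandbox retries fail for the same reason, so kubelet emits one "Error syncing pod, skipping" line per attempt for each affected pod. A rough sketch for tallying which pods are stuck, assuming kubelet journal lines in the format shown above are piped in on stdin (field name pod="..." copied from those lines):

    import re
    import sys
    from collections import Counter

    # Count "Error syncing pod, skipping" occurrences per pod to see which
    # workloads are stuck retrying sandbox creation or teardown.
    stuck = Counter()
    for line in sys.stdin:
        if "Error syncing pod, skipping" in line:
            stuck.update(re.findall(r'pod="([^"]+)"', line))
    for pod, n in stuck.most_common():
        print(f"{n:4d}  {pod}")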
Jun 25 16:21:06.353531 containerd[1894]: time="2024-06-25T16:21:06.353473638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:06.356604 containerd[1894]: time="2024-06-25T16:21:06.356096650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:21:06.358251 containerd[1894]: time="2024-06-25T16:21:06.358205357Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:06.363178 containerd[1894]: time="2024-06-25T16:21:06.363138945Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:06.365786 containerd[1894]: time="2024-06-25T16:21:06.365743837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:06.367411 containerd[1894]: time="2024-06-25T16:21:06.367366622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.409882871s" Jun 25 16:21:06.367587 containerd[1894]: time="2024-06-25T16:21:06.367561396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:21:06.426115 containerd[1894]: time="2024-06-25T16:21:06.426011450Z" level=info msg="CreateContainer within sandbox \"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:21:06.482939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324751311.mount: Deactivated successfully. Jun 25 16:21:06.518476 containerd[1894]: time="2024-06-25T16:21:06.518354560Z" level=info msg="CreateContainer within sandbox \"234273783192ed26fc5bdb6be89c0a89e313b096ea8a36ea4a23a3f356481184\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83\"" Jun 25 16:21:06.534836 containerd[1894]: time="2024-06-25T16:21:06.533955985Z" level=info msg="StartContainer for \"fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83\"" Jun 25 16:21:06.835576 containerd[1894]: time="2024-06-25T16:21:06.835439069Z" level=info msg="StartContainer for \"fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83\" returns successfully" Jun 25 16:21:06.991247 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:21:06.991477 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:21:07.448317 systemd[1]: run-containerd-runc-k8s.io-fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83-runc.qThpsE.mount: Deactivated successfully. Jun 25 16:21:09.162973 systemd[1]: run-containerd-runc-k8s.io-fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83-runc.YD2uD4.mount: Deactivated successfully. 
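The PullImage line above reports an image size of 115238612 bytes fetched in 8.409882871s, which works out to roughly 13.7 MB/s; a one-liner to reproduce the arithmetic (numbers copied from that log line):

    # Rough pull throughput implied by the "Pulled image ... in 8.409882871s" line.
    size_bytes = 115238612
    seconds = 8.409882871
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~13.7 MB/s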
Jun 25 16:21:09.167000 audit[4571]: AVC avc: denied { write } for pid=4571 comm="tee" name="fd" dev="proc" ino=26902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.174412 kernel: audit: type=1400 audit(1719332469.167:286): avc: denied { write } for pid=4571 comm="tee" name="fd" dev="proc" ino=26902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.174576 kernel: audit: type=1300 audit(1719332469.167:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc989d0a22 a2=241 a3=1b6 items=1 ppid=4526 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.167000 audit[4571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc989d0a22 a2=241 a3=1b6 items=1 ppid=4526 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.167000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:21:09.177092 kernel: audit: type=1307 audit(1719332469.167:286): cwd="/etc/service/enabled/felix/log" Jun 25 16:21:09.167000 audit: PATH item=0 name="/dev/fd/63" inode=26852 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.180109 kernel: audit: type=1302 audit(1719332469.167:286): item=0 name="/dev/fd/63" inode=26852 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.189219 kernel: audit: type=1327 audit(1719332469.167:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.207831 kernel: audit: type=1400 audit(1719332469.199:287): avc: denied { write } for pid=4574 comm="tee" name="fd" dev="proc" ino=26544 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.209081 kernel: audit: type=1300 audit(1719332469.199:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb23e9a22 a2=241 a3=1b6 items=1 ppid=4524 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.199000 audit[4574]: AVC avc: denied { write } for pid=4574 comm="tee" name="fd" dev="proc" ino=26544 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.199000 audit[4574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb23e9a22 a2=241 a3=1b6 items=1 ppid=4524 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.199000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:21:09.211087 kernel: audit: 
type=1307 audit(1719332469.199:287): cwd="/etc/service/enabled/confd/log" Jun 25 16:21:09.199000 audit: PATH item=0 name="/dev/fd/63" inode=26853 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.214078 kernel: audit: type=1302 audit(1719332469.199:287): item=0 name="/dev/fd/63" inode=26853 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.218143 kernel: audit: type=1327 audit(1719332469.199:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.213000 audit[4587]: AVC avc: denied { write } for pid=4587 comm="tee" name="fd" dev="proc" ino=26550 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.213000 audit[4587]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc119ca24 a2=241 a3=1b6 items=1 ppid=4521 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.213000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:21:09.213000 audit: PATH item=0 name="/dev/fd/63" inode=26888 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.214000 audit[4600]: AVC avc: denied { write } for pid=4600 comm="tee" name="fd" dev="proc" ino=26926 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.214000 audit[4600]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd20eea22 a2=241 a3=1b6 items=1 ppid=4533 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.214000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:21:09.214000 audit: PATH item=0 name="/dev/fd/63" inode=26904 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.235000 audit[4606]: AVC avc: denied { write } for pid=4606 comm="tee" name="fd" dev="proc" ino=26558 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.235000 audit[4606]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc94cdda13 a2=241 a3=1b6 items=1 ppid=4529 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.235000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:21:09.235000 audit: PATH item=0 name="/dev/fd/63" inode=26922 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.238000 audit[4611]: AVC avc: denied { write } for pid=4611 comm="tee" name="fd" dev="proc" ino=26562 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.238000 audit[4611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe518d2a23 a2=241 a3=1b6 items=1 ppid=4527 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.238000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:21:09.238000 audit: PATH item=0 name="/dev/fd/63" inode=26554 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.253000 audit[4613]: AVC avc: denied { write } for pid=4613 comm="tee" name="fd" dev="proc" ino=26566 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:21:09.253000 audit[4613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdb04aa12 a2=241 a3=1b6 items=1 ppid=4536 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:09.253000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:21:09.253000 audit: PATH item=0 name="/dev/fd/63" inode=26557 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:21:09.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:21:09.736247 containerd[1894]: time="2024-06-25T16:21:09.736183162Z" level=info msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" Jun 25 16:21:09.967344 kubelet[3353]: I0625 16:21:09.966006 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-t8jhg" podStartSLOduration=6.461644665 podCreationTimestamp="2024-06-25 16:20:44 +0000 UTC" firstStartedPulling="2024-06-25 16:20:46.919613462 +0000 UTC m=+20.680817223" lastFinishedPulling="2024-06-25 16:21:06.3679063 +0000 UTC m=+40.129110055" observedRunningTime="2024-06-25 16:21:07.142305544 +0000 UTC m=+40.903509318" watchObservedRunningTime="2024-06-25 16:21:09.909937497 +0000 UTC m=+43.671141269" Jun 25 16:21:10.149824 systemd-networkd[1581]: vxlan.calico: Link UP Jun 25 16:21:10.149833 systemd-networkd[1581]: vxlan.calico: 
Gained carrier Jun 25 16:21:10.151139 (udev-worker)[4710]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:21:10.168000 audit: BPF prog-id=10 op=LOAD Jun 25 16:21:10.168000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2ed35860 a2=70 a3=7f3d0c88c000 items=0 ppid=4531 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:21:10.168000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:21:10.168000 audit: BPF prog-id=11 op=LOAD Jun 25 16:21:10.168000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2ed35860 a2=70 a3=6f items=0 ppid=4531 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:21:10.168000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:21:10.168000 audit: BPF prog-id=12 op=LOAD Jun 25 16:21:10.168000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc2ed357f0 a2=70 a3=7ffc2ed35860 items=0 ppid=4531 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:21:10.168000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:21:10.170000 audit: BPF prog-id=13 op=LOAD Jun 25 16:21:10.170000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc2ed35820 a2=70 a3=0 items=0 ppid=4531 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.170000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:21:10.177558 (udev-worker)[4711]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:21:10.210323 (udev-worker)[4715]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:21:10.305000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.913 [INFO][4661] k8s.go 608: Cleaning up netns ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.914 [INFO][4661] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" iface="eth0" netns="/var/run/netns/cni-bf3a4aed-cce1-44b9-3490-b1ae85155bc7" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.914 [INFO][4661] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" iface="eth0" netns="/var/run/netns/cni-bf3a4aed-cce1-44b9-3490-b1ae85155bc7" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.921 [INFO][4661] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" iface="eth0" netns="/var/run/netns/cni-bf3a4aed-cce1-44b9-3490-b1ae85155bc7" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.921 [INFO][4661] k8s.go 615: Releasing IP address(es) ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:09.921 [INFO][4661] utils.go 188: Calico CNI releasing IP address ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.337 [INFO][4686] ipam_plugin.go 411: Releasing address using handleID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.342 [INFO][4686] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.342 [INFO][4686] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.389 [WARNING][4686] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.389 [INFO][4686] ipam_plugin.go 439: Releasing address using workloadID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.392 [INFO][4686] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:10.404171 containerd[1894]: 2024-06-25 16:21:10.396 [INFO][4661] k8s.go 621: Teardown processing complete. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:10.409584 systemd[1]: run-netns-cni\x2dbf3a4aed\x2dcce1\x2d44b9\x2d3490\x2db1ae85155bc7.mount: Deactivated successfully. 
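The teardown above deletes the sandbox's network namespace (/var/run/netns/cni-bf3a4aed-...), after which systemd unmounts the matching run-netns unit. Listing what is left under /var/run/netns is a quick way to spot sandboxes whose cleanup never completed; a small illustrative sketch, with the directory path taken from the log rather than any Calico API:

    import os

    # List leftover CNI network namespaces (named cni-<uuid> in the log above).
    netns_dir = "/var/run/netns"
    if os.path.isdir(netns_dir):
        for name in sorted(os.listdir(netns_dir)):
            print(name)
    else:
        print(f"{netns_dir} does not exist on this host")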
Jun 25 16:21:10.411205 containerd[1894]: time="2024-06-25T16:21:10.411160435Z" level=info msg="TearDown network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" successfully" Jun 25 16:21:10.411666 containerd[1894]: time="2024-06-25T16:21:10.411614648Z" level=info msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" returns successfully" Jun 25 16:21:10.465864 containerd[1894]: time="2024-06-25T16:21:10.465812110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l82ff,Uid:7047a1f0-d856-4ebc-9890-f4acf3b2eb78,Namespace:calico-system,Attempt:1,}" Jun 25 16:21:10.559000 audit[4745]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:10.559000 audit[4745]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff36914c40 a2=0 a3=7fff36914c2c items=0 ppid=4531 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.559000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:10.563000 audit[4743]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:10.563000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff4e6847d0 a2=0 a3=7fff4e6847bc items=0 ppid=4531 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.563000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:10.565000 audit[4744]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=4744 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:10.565000 audit[4744]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd03f15b10 a2=0 a3=7ffd03f15afc items=0 ppid=4531 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.565000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:10.569000 audit[4746]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=4746 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:10.569000 audit[4746]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffdca1a17f0 a2=0 a3=7ffdca1a17dc items=0 ppid=4531 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.569000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:10.828450 systemd-networkd[1581]: cali19077955ae1: Link UP Jun 25 16:21:10.833200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali19077955ae1: link becomes ready Jun 25 16:21:10.832728 systemd-networkd[1581]: cali19077955ae1: Gained carrier Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.689 [INFO][4752] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0 csi-node-driver- calico-system 7047a1f0-d856-4ebc-9890-f4acf3b2eb78 689 0 2024-06-25 16:20:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-52 csi-node-driver-l82ff eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali19077955ae1 [] []}} ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.689 [INFO][4752] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.754 [INFO][4763] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" HandleID="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.764 [INFO][4763] ipam_plugin.go 264: Auto assigning IP ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" HandleID="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-52", "pod":"csi-node-driver-l82ff", "timestamp":"2024-06-25 16:21:10.754160167 +0000 UTC"}, Hostname:"ip-172-31-30-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.765 [INFO][4763] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.765 [INFO][4763] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
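The audit PROCTITLE fields above are the process's argv, hex-encoded with NUL separators. Decoding one of the NETFILTER_CFG records shows the exact command calico-node invoked; the same trick applies to the earlier tee and bpftool records. The sample value below is copied verbatim from a record above:

    # Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated).
    hex_proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000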
Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.765 [INFO][4763] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-52' Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.767 [INFO][4763] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.782 [INFO][4763] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.792 [INFO][4763] ipam.go 489: Trying affinity for 192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.796 [INFO][4763] ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.802 [INFO][4763] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.802 [INFO][4763] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.804 [INFO][4763] ipam.go 1685: Creating new handle: k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.810 [INFO][4763] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.820 [INFO][4763] ipam.go 1216: Successfully claimed IPs: [192.168.50.1/26] block=192.168.50.0/26 handle="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.821 [INFO][4763] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.1/26] handle="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" host="ip-172-31-30-52" Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.821 [INFO][4763] ipam_plugin.go 373: Released host-wide IPAM lock. 
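The IPAM plugin claims addresses from a per-node block: 192.168.50.0/26 is affine to ip-172-31-30-52, and the csi-node-driver pod receives 192.168.50.1. A /26 block holds 64 addresses; a short check with the standard ipaddress module, using the block and address from the log lines above:

    import ipaddress

    block = ipaddress.ip_network("192.168.50.0/26")
    pod_ip = ipaddress.ip_address("192.168.50.1")
    print(block.num_addresses)  # 64 addresses per /26 block
    print(pod_ip in block)      # True: the assigned pod IP falls in the node's block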
Jun 25 16:21:10.886371 containerd[1894]: 2024-06-25 16:21:10.821 [INFO][4763] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.50.1/26] IPv6=[] ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" HandleID="k8s-pod-network.69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.825 [INFO][4752] k8s.go 386: Populated endpoint ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7047a1f0-d856-4ebc-9890-f4acf3b2eb78", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"", Pod:"csi-node-driver-l82ff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19077955ae1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.825 [INFO][4752] k8s.go 387: Calico CNI using IPs: [192.168.50.1/32] ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.825 [INFO][4752] dataplane_linux.go 68: Setting the host side veth name to cali19077955ae1 ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.852 [INFO][4752] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.854 [INFO][4752] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7047a1f0-d856-4ebc-9890-f4acf3b2eb78", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc", Pod:"csi-node-driver-l82ff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19077955ae1", MAC:"f6:96:bd:2d:b0:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:10.888258 containerd[1894]: 2024-06-25 16:21:10.870 [INFO][4752] k8s.go 500: Wrote updated endpoint to datastore ContainerID="69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc" Namespace="calico-system" Pod="csi-node-driver-l82ff" WorkloadEndpoint="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:10.933000 audit[4783]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4783 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:10.933000 audit[4783]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffda32ffef0 a2=0 a3=7ffda32ffedc items=0 ppid=4531 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:10.933000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:10.966340 containerd[1894]: time="2024-06-25T16:21:10.966127026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:21:10.966340 containerd[1894]: time="2024-06-25T16:21:10.966214213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:10.966340 containerd[1894]: time="2024-06-25T16:21:10.966246193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:21:10.966340 containerd[1894]: time="2024-06-25T16:21:10.966267698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:11.065174 containerd[1894]: time="2024-06-25T16:21:11.065109158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l82ff,Uid:7047a1f0-d856-4ebc-9890-f4acf3b2eb78,Namespace:calico-system,Attempt:1,} returns sandbox id \"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc\"" Jun 25 16:21:11.067424 containerd[1894]: time="2024-06-25T16:21:11.067388093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:21:11.399177 systemd-networkd[1581]: vxlan.calico: Gained IPv6LL Jun 25 16:21:11.409461 systemd[1]: run-containerd-runc-k8s.io-69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc-runc.cUCt9D.mount: Deactivated successfully. Jun 25 16:21:11.722337 containerd[1894]: time="2024-06-25T16:21:11.720512004Z" level=info msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.780 [INFO][4841] k8s.go 608: Cleaning up netns ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.780 [INFO][4841] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" iface="eth0" netns="/var/run/netns/cni-a130028b-ee03-588f-61cb-74374cef2d22" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.780 [INFO][4841] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" iface="eth0" netns="/var/run/netns/cni-a130028b-ee03-588f-61cb-74374cef2d22" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.781 [INFO][4841] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" iface="eth0" netns="/var/run/netns/cni-a130028b-ee03-588f-61cb-74374cef2d22" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.781 [INFO][4841] k8s.go 615: Releasing IP address(es) ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.781 [INFO][4841] utils.go 188: Calico CNI releasing IP address ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.806 [INFO][4847] ipam_plugin.go 411: Releasing address using handleID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.806 [INFO][4847] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.806 [INFO][4847] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.818 [WARNING][4847] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.818 [INFO][4847] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.820 [INFO][4847] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:11.826140 containerd[1894]: 2024-06-25 16:21:11.822 [INFO][4841] k8s.go 621: Teardown processing complete. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:11.833199 containerd[1894]: time="2024-06-25T16:21:11.830172286Z" level=info msg="TearDown network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" successfully" Jun 25 16:21:11.833199 containerd[1894]: time="2024-06-25T16:21:11.830240241Z" level=info msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" returns successfully" Jun 25 16:21:11.831722 systemd[1]: run-netns-cni\x2da130028b\x2dee03\x2d588f\x2d61cb\x2d74374cef2d22.mount: Deactivated successfully. Jun 25 16:21:11.833728 containerd[1894]: time="2024-06-25T16:21:11.833685931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7584f4db69-64v2k,Uid:fb951166-44c7-4dc8-b31a-6c99194a7411,Namespace:calico-system,Attempt:1,}" Jun 25 16:21:12.050334 systemd-networkd[1581]: cali5f4045df68b: Link UP Jun 25 16:21:12.055832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:21:12.056012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f4045df68b: link becomes ready Jun 25 16:21:12.056249 systemd-networkd[1581]: cali5f4045df68b: Gained carrier Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.941 [INFO][4853] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0 calico-kube-controllers-7584f4db69- calico-system fb951166-44c7-4dc8-b31a-6c99194a7411 698 0 2024-06-25 16:20:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7584f4db69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-52 calico-kube-controllers-7584f4db69-64v2k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5f4045df68b [] []}} ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.942 [INFO][4853] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 
16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.984 [INFO][4865] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" HandleID="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.995 [INFO][4865] ipam_plugin.go 264: Auto assigning IP ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" HandleID="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efde0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-52", "pod":"calico-kube-controllers-7584f4db69-64v2k", "timestamp":"2024-06-25 16:21:11.984858687 +0000 UTC"}, Hostname:"ip-172-31-30-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.995 [INFO][4865] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.995 [INFO][4865] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.995 [INFO][4865] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-52' Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:11.997 [INFO][4865] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.002 [INFO][4865] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.008 [INFO][4865] ipam.go 489: Trying affinity for 192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.010 [INFO][4865] ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.013 [INFO][4865] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.013 [INFO][4865] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.015 [INFO][4865] ipam.go 1685: Creating new handle: k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6 Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.028 [INFO][4865] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.035 [INFO][4865] ipam.go 1216: Successfully claimed IPs: [192.168.50.2/26] block=192.168.50.0/26 handle="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" 
host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.036 [INFO][4865] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.2/26] handle="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" host="ip-172-31-30-52" Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.036 [INFO][4865] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:12.075454 containerd[1894]: 2024-06-25 16:21:12.036 [INFO][4865] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.50.2/26] IPv6=[] ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" HandleID="k8s-pod-network.8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.038 [INFO][4853] k8s.go 386: Populated endpoint ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0", GenerateName:"calico-kube-controllers-7584f4db69-", Namespace:"calico-system", SelfLink:"", UID:"fb951166-44c7-4dc8-b31a-6c99194a7411", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7584f4db69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"", Pod:"calico-kube-controllers-7584f4db69-64v2k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f4045df68b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.038 [INFO][4853] k8s.go 387: Calico CNI using IPs: [192.168.50.2/32] ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.039 [INFO][4853] dataplane_linux.go 68: Setting the host side veth name to cali5f4045df68b ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.057 [INFO][4853] dataplane_linux.go 479: 
Disabling IPv4 forwarding ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.057 [INFO][4853] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0", GenerateName:"calico-kube-controllers-7584f4db69-", Namespace:"calico-system", SelfLink:"", UID:"fb951166-44c7-4dc8-b31a-6c99194a7411", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7584f4db69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6", Pod:"calico-kube-controllers-7584f4db69-64v2k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f4045df68b", MAC:"2e:20:bc:90:1d:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:12.077020 containerd[1894]: 2024-06-25 16:21:12.071 [INFO][4853] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6" Namespace="calico-system" Pod="calico-kube-controllers-7584f4db69-64v2k" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:12.101000 audit[4883]: NETFILTER_CFG table=filter:102 family=2 entries=34 op=nft_register_chain pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:12.101000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=18640 a0=3 a1=7fff0f829130 a2=0 a3=7fff0f82911c items=0 ppid=4531 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:12.101000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:12.127597 containerd[1894]: time="2024-06-25T16:21:12.127457554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:21:12.127597 containerd[1894]: time="2024-06-25T16:21:12.127553780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:12.127922 containerd[1894]: time="2024-06-25T16:21:12.127584959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:21:12.127922 containerd[1894]: time="2024-06-25T16:21:12.127602760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:12.226082 containerd[1894]: time="2024-06-25T16:21:12.226025380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7584f4db69-64v2k,Uid:fb951166-44c7-4dc8-b31a-6c99194a7411,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6\"" Jun 25 16:21:12.861137 systemd-networkd[1581]: cali19077955ae1: Gained IPv6LL Jun 25 16:21:13.059321 containerd[1894]: time="2024-06-25T16:21:13.059275110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:13.060682 containerd[1894]: time="2024-06-25T16:21:13.060624448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:21:13.062968 containerd[1894]: time="2024-06-25T16:21:13.062895049Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:13.065390 containerd[1894]: time="2024-06-25T16:21:13.065359013Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:13.068778 containerd[1894]: time="2024-06-25T16:21:13.068744019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:13.069561 containerd[1894]: time="2024-06-25T16:21:13.069520792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.001896106s" Jun 25 16:21:13.069671 containerd[1894]: time="2024-06-25T16:21:13.069568900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:21:13.070798 containerd[1894]: time="2024-06-25T16:21:13.070773947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:21:13.087280 containerd[1894]: time="2024-06-25T16:21:13.086990758Z" level=info msg="CreateContainer within sandbox \"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:21:13.119863 containerd[1894]: time="2024-06-25T16:21:13.119131957Z" level=info msg="CreateContainer within sandbox \"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc\" 
for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"882d5ca89d9a809c8476bc83e55d56f24f196a38dea82b20c7dbb8c44d0c69e3\"" Jun 25 16:21:13.120511 containerd[1894]: time="2024-06-25T16:21:13.120463655Z" level=info msg="StartContainer for \"882d5ca89d9a809c8476bc83e55d56f24f196a38dea82b20c7dbb8c44d0c69e3\"" Jun 25 16:21:13.209644 containerd[1894]: time="2024-06-25T16:21:13.209606373Z" level=info msg="StartContainer for \"882d5ca89d9a809c8476bc83e55d56f24f196a38dea82b20c7dbb8c44d0c69e3\" returns successfully" Jun 25 16:21:13.720405 containerd[1894]: time="2024-06-25T16:21:13.720349147Z" level=info msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.775 [INFO][4980] k8s.go 608: Cleaning up netns ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.776 [INFO][4980] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" iface="eth0" netns="/var/run/netns/cni-78716a8e-0a3d-8cd3-69f5-3c85a919ef69" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.776 [INFO][4980] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" iface="eth0" netns="/var/run/netns/cni-78716a8e-0a3d-8cd3-69f5-3c85a919ef69" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.776 [INFO][4980] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" iface="eth0" netns="/var/run/netns/cni-78716a8e-0a3d-8cd3-69f5-3c85a919ef69" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.776 [INFO][4980] k8s.go 615: Releasing IP address(es) ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.776 [INFO][4980] utils.go 188: Calico CNI releasing IP address ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.853 [INFO][4986] ipam_plugin.go 411: Releasing address using handleID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.854 [INFO][4986] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.854 [INFO][4986] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.870 [WARNING][4986] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.871 [INFO][4986] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.873 [INFO][4986] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:13.877324 containerd[1894]: 2024-06-25 16:21:13.874 [INFO][4980] k8s.go 621: Teardown processing complete. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:13.882926 systemd[1]: run-netns-cni\x2d78716a8e\x2d0a3d\x2d8cd3\x2d69f5\x2d3c85a919ef69.mount: Deactivated successfully. Jun 25 16:21:13.886552 containerd[1894]: time="2024-06-25T16:21:13.886503718Z" level=info msg="TearDown network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" successfully" Jun 25 16:21:13.886552 containerd[1894]: time="2024-06-25T16:21:13.886548466Z" level=info msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" returns successfully" Jun 25 16:21:13.887623 containerd[1894]: time="2024-06-25T16:21:13.887557254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t67mr,Uid:529c05d3-5db3-465d-a964-613b90de0483,Namespace:kube-system,Attempt:1,}" Jun 25 16:21:14.076743 systemd-networkd[1581]: cali5f4045df68b: Gained IPv6LL Jun 25 16:21:14.310214 systemd-networkd[1581]: cali7ae64f11034: Link UP Jun 25 16:21:14.313173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:21:14.313440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7ae64f11034: link becomes ready Jun 25 16:21:14.313493 systemd-networkd[1581]: cali7ae64f11034: Gained carrier Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:13.994 [INFO][4992] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0 coredns-5dd5756b68- kube-system 529c05d3-5db3-465d-a964-613b90de0483 712 0 2024-06-25 16:20:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-52 coredns-5dd5756b68-t67mr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ae64f11034 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:13.994 [INFO][4992] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.067 [INFO][5004] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" HandleID="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.259 [INFO][5004] ipam_plugin.go 264: Auto assigning IP ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" HandleID="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001003f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-52", "pod":"coredns-5dd5756b68-t67mr", "timestamp":"2024-06-25 16:21:14.067545552 +0000 UTC"}, Hostname:"ip-172-31-30-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.259 [INFO][5004] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.260 [INFO][5004] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.260 [INFO][5004] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-52' Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.263 [INFO][5004] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.269 [INFO][5004] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.275 [INFO][5004] ipam.go 489: Trying affinity for 192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.278 [INFO][5004] ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.282 [INFO][5004] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.282 [INFO][5004] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.285 [INFO][5004] ipam.go 1685: Creating new handle: k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291 Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.291 [INFO][5004] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.301 [INFO][5004] ipam.go 1216: Successfully claimed IPs: [192.168.50.3/26] block=192.168.50.0/26 handle="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.301 [INFO][5004] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.3/26] 
handle="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" host="ip-172-31-30-52" Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.301 [INFO][5004] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:14.345843 containerd[1894]: 2024-06-25 16:21:14.301 [INFO][5004] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.50.3/26] IPv6=[] ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" HandleID="k8s-pod-network.ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.304 [INFO][4992] k8s.go 386: Populated endpoint ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"529c05d3-5db3-465d-a964-613b90de0483", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"", Pod:"coredns-5dd5756b68-t67mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ae64f11034", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.304 [INFO][4992] k8s.go 387: Calico CNI using IPs: [192.168.50.3/32] ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.304 [INFO][4992] dataplane_linux.go 68: Setting the host side veth name to cali7ae64f11034 ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.311 [INFO][4992] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.314 [INFO][4992] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"529c05d3-5db3-465d-a964-613b90de0483", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291", Pod:"coredns-5dd5756b68-t67mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ae64f11034", MAC:"5a:cd:7f:da:7a:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:14.348884 containerd[1894]: 2024-06-25 16:21:14.334 [INFO][4992] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291" Namespace="kube-system" Pod="coredns-5dd5756b68-t67mr" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:14.379000 audit[5023]: NETFILTER_CFG table=filter:103 family=2 entries=42 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:14.383777 kernel: kauditd_printk_skb: 59 callbacks suppressed Jun 25 16:21:14.383872 kernel: audit: type=1325 audit(1719332474.379:307): table=filter:103 family=2 entries=42 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:14.379000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffca4a23c40 a2=0 a3=7ffca4a23c2c items=0 ppid=4531 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:21:14.388012 kernel: audit: type=1300 audit(1719332474.379:307): arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffca4a23c40 a2=0 a3=7ffca4a23c2c items=0 ppid=4531 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:14.379000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:14.396600 kernel: audit: type=1327 audit(1719332474.379:307): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:14.412777 containerd[1894]: time="2024-06-25T16:21:14.408006948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:21:14.412777 containerd[1894]: time="2024-06-25T16:21:14.408160867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:14.412777 containerd[1894]: time="2024-06-25T16:21:14.408196396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:21:14.412777 containerd[1894]: time="2024-06-25T16:21:14.408221593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:14.557949 containerd[1894]: time="2024-06-25T16:21:14.557910840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t67mr,Uid:529c05d3-5db3-465d-a964-613b90de0483,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291\"" Jun 25 16:21:14.565194 containerd[1894]: time="2024-06-25T16:21:14.565146351Z" level=info msg="CreateContainer within sandbox \"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:21:14.631224 containerd[1894]: time="2024-06-25T16:21:14.631107688Z" level=info msg="CreateContainer within sandbox \"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b66d54eb5d035ea15af88e3793ff7cfac68db202e710bd22bf4ea4a0bcff4505\"" Jun 25 16:21:14.634566 containerd[1894]: time="2024-06-25T16:21:14.633380043Z" level=info msg="StartContainer for \"b66d54eb5d035ea15af88e3793ff7cfac68db202e710bd22bf4ea4a0bcff4505\"" Jun 25 16:21:14.724403 containerd[1894]: time="2024-06-25T16:21:14.724352935Z" level=info msg="StartContainer for \"b66d54eb5d035ea15af88e3793ff7cfac68db202e710bd22bf4ea4a0bcff4505\" returns successfully" Jun 25 16:21:14.725231 containerd[1894]: time="2024-06-25T16:21:14.725112317Z" level=info msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.804 [INFO][5116] k8s.go 608: Cleaning up netns ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.805 [INFO][5116] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" iface="eth0" netns="/var/run/netns/cni-c5fb23b0-cc80-ce26-7807-6c260e2c396a" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.805 [INFO][5116] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" iface="eth0" netns="/var/run/netns/cni-c5fb23b0-cc80-ce26-7807-6c260e2c396a" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.805 [INFO][5116] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" iface="eth0" netns="/var/run/netns/cni-c5fb23b0-cc80-ce26-7807-6c260e2c396a" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.805 [INFO][5116] k8s.go 615: Releasing IP address(es) ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.805 [INFO][5116] utils.go 188: Calico CNI releasing IP address ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.852 [INFO][5126] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.853 [INFO][5126] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.853 [INFO][5126] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.863 [WARNING][5126] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.864 [INFO][5126] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.866 [INFO][5126] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:14.872224 containerd[1894]: 2024-06-25 16:21:14.869 [INFO][5116] k8s.go 621: Teardown processing complete. 
ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:14.873238 containerd[1894]: time="2024-06-25T16:21:14.872408714Z" level=info msg="TearDown network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" successfully" Jun 25 16:21:14.873238 containerd[1894]: time="2024-06-25T16:21:14.872449895Z" level=info msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" returns successfully" Jun 25 16:21:14.873775 containerd[1894]: time="2024-06-25T16:21:14.873736565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fkctm,Uid:aedaeef2-cca5-497d-9e7c-89ecb9122d1f,Namespace:kube-system,Attempt:1,}" Jun 25 16:21:14.885564 systemd[1]: run-containerd-runc-k8s.io-ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291-runc.eqPcZi.mount: Deactivated successfully. Jun 25 16:21:14.885765 systemd[1]: run-netns-cni\x2dc5fb23b0\x2dcc80\x2dce26\x2d7807\x2d6c260e2c396a.mount: Deactivated successfully. Jun 25 16:21:15.376000 audit[5156]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:15.376000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe41dcbf10 a2=0 a3=7ffe41dcbefc items=0 ppid=3494 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:15.393809 kernel: audit: type=1325 audit(1719332475.376:308): table=filter:104 family=2 entries=14 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:15.393885 kernel: audit: type=1300 audit(1719332475.376:308): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe41dcbf10 a2=0 a3=7ffe41dcbefc items=0 ppid=3494 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:15.393917 kernel: audit: type=1327 audit(1719332475.376:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:15.393946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:21:15.393993 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califd4a57b34d3: link becomes ready Jun 25 16:21:15.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:15.359188 systemd-networkd[1581]: cali7ae64f11034: Gained IPv6LL Jun 25 16:21:15.386678 systemd-networkd[1581]: califd4a57b34d3: Link UP Jun 25 16:21:15.392435 systemd-networkd[1581]: califd4a57b34d3: Gained carrier Jun 25 16:21:15.468093 kernel: audit: type=1325 audit(1719332475.378:309): table=nat:105 family=2 entries=14 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:15.378000 audit[5156]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:15.499619 kubelet[3353]: I0625 16:21:15.498160 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-t67mr" podStartSLOduration=37.498089693 podCreationTimestamp="2024-06-25 16:20:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:21:15.254357698 +0000 UTC m=+49.015561469" watchObservedRunningTime="2024-06-25 16:21:15.498089693 +0000 UTC m=+49.259293460" Jun 25 16:21:15.378000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe41dcbf10 a2=0 a3=0 items=0 ppid=3494 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.020 [INFO][5132] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0 coredns-5dd5756b68- kube-system aedaeef2-cca5-497d-9e7c-89ecb9122d1f 725 0 2024-06-25 16:20:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-52 coredns-5dd5756b68-fkctm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd4a57b34d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.021 [INFO][5132] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.205 [INFO][5147] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" HandleID="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.237 [INFO][5147] ipam_plugin.go 264: Auto assigning IP ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" HandleID="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-52", "pod":"coredns-5dd5756b68-fkctm", "timestamp":"2024-06-25 16:21:15.205762865 +0000 UTC"}, Hostname:"ip-172-31-30-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.237 [INFO][5147] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.237 [INFO][5147] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.237 [INFO][5147] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-52' Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.242 [INFO][5147] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.267 [INFO][5147] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.284 [INFO][5147] ipam.go 489: Trying affinity for 192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.289 [INFO][5147] ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.295 [INFO][5147] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.295 [INFO][5147] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.300 [INFO][5147] ipam.go 1685: Creating new handle: k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74 Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.308 [INFO][5147] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.323 [INFO][5147] ipam.go 1216: Successfully claimed IPs: [192.168.50.4/26] block=192.168.50.0/26 handle="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.323 [INFO][5147] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.4/26] handle="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" host="ip-172-31-30-52" Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.323 [INFO][5147] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:21:15.517934 containerd[1894]: 2024-06-25 16:21:15.323 [INFO][5147] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" HandleID="k8s-pod-network.b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.530778 kernel: audit: type=1300 audit(1719332475.378:309): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe41dcbf10 a2=0 a3=0 items=0 ppid=3494 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:15.530845 kernel: audit: type=1327 audit(1719332475.378:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:15.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:15.530945 containerd[1894]: 2024-06-25 16:21:15.343 [INFO][5132] k8s.go 386: Populated endpoint ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aedaeef2-cca5-497d-9e7c-89ecb9122d1f", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"", Pod:"coredns-5dd5756b68-fkctm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd4a57b34d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:15.530945 containerd[1894]: 2024-06-25 16:21:15.343 [INFO][5132] k8s.go 387: Calico CNI using IPs: [192.168.50.4/32] ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.530945 
containerd[1894]: 2024-06-25 16:21:15.343 [INFO][5132] dataplane_linux.go 68: Setting the host side veth name to califd4a57b34d3 ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.530945 containerd[1894]: 2024-06-25 16:21:15.396 [INFO][5132] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.530945 containerd[1894]: 2024-06-25 16:21:15.470 [INFO][5132] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aedaeef2-cca5-497d-9e7c-89ecb9122d1f", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74", Pod:"coredns-5dd5756b68-fkctm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd4a57b34d3", MAC:"b6:96:f9:6e:a1:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:15.530945 containerd[1894]: 2024-06-25 16:21:15.506 [INFO][5132] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74" Namespace="kube-system" Pod="coredns-5dd5756b68-fkctm" WorkloadEndpoint="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:15.712977 containerd[1894]: time="2024-06-25T16:21:15.712798712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:21:15.713520 containerd[1894]: time="2024-06-25T16:21:15.712995540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:15.713520 containerd[1894]: time="2024-06-25T16:21:15.713100302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:21:15.713520 containerd[1894]: time="2024-06-25T16:21:15.713432313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:15.810000 audit[5199]: NETFILTER_CFG table=filter:106 family=2 entries=38 op=nft_register_chain pid=5199 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:15.816728 kernel: audit: type=1325 audit(1719332475.810:310): table=filter:106 family=2 entries=38 op=nft_register_chain pid=5199 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:15.810000 audit[5199]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffd5cffcda0 a2=0 a3=7ffd5cffcd8c items=0 ppid=4531 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:15.810000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:15.888642 systemd[1]: run-containerd-runc-k8s.io-b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74-runc.1tMaJO.mount: Deactivated successfully. Jun 25 16:21:15.994411 containerd[1894]: time="2024-06-25T16:21:15.994288582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fkctm,Uid:aedaeef2-cca5-497d-9e7c-89ecb9122d1f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74\"" Jun 25 16:21:16.005365 containerd[1894]: time="2024-06-25T16:21:16.005327270Z" level=info msg="CreateContainer within sandbox \"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:21:16.070645 containerd[1894]: time="2024-06-25T16:21:16.070585267Z" level=info msg="CreateContainer within sandbox \"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e0a5ee618ca47d9e9347d862b210564c3bb83c1018336aa947acbae45f1f49b\"" Jun 25 16:21:16.071784 containerd[1894]: time="2024-06-25T16:21:16.071757505Z" level=info msg="StartContainer for \"4e0a5ee618ca47d9e9347d862b210564c3bb83c1018336aa947acbae45f1f49b\"" Jun 25 16:21:16.290000 audit[5246]: NETFILTER_CFG table=filter:107 family=2 entries=11 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:16.290000 audit[5246]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff8b992fc0 a2=0 a3=7fff8b992fac items=0 ppid=3494 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:16.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:16.299000 audit[5246]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:21:16.299000 audit[5246]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff8b992fc0 a2=0 a3=7fff8b992fac items=0 ppid=3494 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:16.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:16.431325 containerd[1894]: time="2024-06-25T16:21:16.431222323Z" level=info msg="StartContainer for \"4e0a5ee618ca47d9e9347d862b210564c3bb83c1018336aa947acbae45f1f49b\" returns successfully" Jun 25 16:21:16.475591 systemd[1]: Started sshd@7-172.31.30.52:22-139.178.89.65:54544.service - OpenSSH per-connection server daemon (139.178.89.65:54544). Jun 25 16:21:16.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.52:22-139.178.89.65:54544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:16.847000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:16.851663 sshd[5265]: Accepted publickey for core from 139.178.89.65 port 54544 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:16.851000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:16.851000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3a01ad50 a2=3 a3=7fef6acb0480 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:16.851000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:16.855638 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:16.895546 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:21:16.901724 systemd-logind[1885]: New session 8 of user core. 
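The NETFILTER_CFG/SYSCALL/PROCTITLE triples above record iptables-restore runs rewriting the nftables-backed rule set while the coredns sandbox is wired up; the PROCTITLE field is the invoked command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch (Python, not part of the log; the literal is copied from the iptables-restore record above):

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> list:
        return [arg.decode() for arg in bytes.fromhex(hex_value).split(b"\x00")]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']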
Jun 25 16:21:16.954000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:16.957000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:17.213925 systemd-networkd[1581]: califd4a57b34d3: Gained IPv6LL Jun 25 16:21:17.415616 kubelet[3353]: I0625 16:21:17.413782 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fkctm" podStartSLOduration=39.413675511 podCreationTimestamp="2024-06-25 16:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:21:17.356045137 +0000 UTC m=+51.117248907" watchObservedRunningTime="2024-06-25 16:21:17.413675511 +0000 UTC m=+51.174879280" Jun 25 16:21:17.521000 audit[5280]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:17.521000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd358eb620 a2=0 a3=7ffd358eb60c items=0 ppid=3494 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:17.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:17.532000 audit[5280]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:17.532000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd358eb620 a2=0 a3=7ffd358eb60c items=0 ppid=3494 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:17.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:17.568000 audit[5282]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:17.568000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc01962760 a2=0 a3=7ffc0196274c items=0 ppid=3494 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:17.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:17.607000 audit[5282]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:21:17.607000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 
a1=7ffc01962760 a2=0 a3=7ffc0196274c items=0 ppid=3494 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:17.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:21:18.107451 sshd[5265]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:18.109000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:18.109000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:18.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.52:22-139.178.89.65:54544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:18.113328 systemd[1]: sshd@7-172.31.30.52:22-139.178.89.65:54544.service: Deactivated successfully. Jun 25 16:21:18.114652 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:21:18.119414 systemd-logind[1885]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:21:18.122193 systemd-logind[1885]: Removed session 8. Jun 25 16:21:19.041704 containerd[1894]: time="2024-06-25T16:21:19.041412231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:19.043686 containerd[1894]: time="2024-06-25T16:21:19.043587411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:21:19.046085 containerd[1894]: time="2024-06-25T16:21:19.046018648Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:19.089481 containerd[1894]: time="2024-06-25T16:21:19.089433089Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:19.102886 containerd[1894]: time="2024-06-25T16:21:19.102838816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:19.104753 containerd[1894]: time="2024-06-25T16:21:19.104602520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 6.033626117s" Jun 25 16:21:19.104902 containerd[1894]: time="2024-06-25T16:21:19.104756020Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:21:19.106939 containerd[1894]: time="2024-06-25T16:21:19.106905003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:21:19.126039 containerd[1894]: time="2024-06-25T16:21:19.125991757Z" level=info msg="CreateContainer within sandbox \"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:21:19.245989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064288357.mount: Deactivated successfully. Jun 25 16:21:19.286528 containerd[1894]: time="2024-06-25T16:21:19.286174874Z" level=info msg="CreateContainer within sandbox \"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d\"" Jun 25 16:21:19.287944 containerd[1894]: time="2024-06-25T16:21:19.287905043Z" level=info msg="StartContainer for \"eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d\"" Jun 25 16:21:19.479265 containerd[1894]: time="2024-06-25T16:21:19.479210803Z" level=info msg="StartContainer for \"eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d\" returns successfully" Jun 25 16:21:20.319361 systemd[1]: run-containerd-runc-k8s.io-eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d-runc.C4YMV6.mount: Deactivated successfully. Jun 25 16:21:20.402449 kubelet[3353]: I0625 16:21:20.402248 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7584f4db69-64v2k" podStartSLOduration=29.524080392 podCreationTimestamp="2024-06-25 16:20:44 +0000 UTC" firstStartedPulling="2024-06-25 16:21:12.227363829 +0000 UTC m=+45.988567580" lastFinishedPulling="2024-06-25 16:21:19.105479193 +0000 UTC m=+52.866682954" observedRunningTime="2024-06-25 16:21:20.294165789 +0000 UTC m=+54.055369560" watchObservedRunningTime="2024-06-25 16:21:20.402195766 +0000 UTC m=+54.163399541" Jun 25 16:21:21.522234 containerd[1894]: time="2024-06-25T16:21:21.522184105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:21.532452 containerd[1894]: time="2024-06-25T16:21:21.532394465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:21:21.559965 containerd[1894]: time="2024-06-25T16:21:21.559795505Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:21.570665 containerd[1894]: time="2024-06-25T16:21:21.570622903Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:21.576887 containerd[1894]: time="2024-06-25T16:21:21.576840223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:21.587356 containerd[1894]: time="2024-06-25T16:21:21.587269729Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.480301077s" Jun 25 16:21:21.587356 containerd[1894]: time="2024-06-25T16:21:21.587318014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:21:21.592740 containerd[1894]: time="2024-06-25T16:21:21.592703646Z" level=info msg="CreateContainer within sandbox \"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:21:21.627619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977330102.mount: Deactivated successfully. Jun 25 16:21:21.633859 containerd[1894]: time="2024-06-25T16:21:21.633776287Z" level=info msg="CreateContainer within sandbox \"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de89891bdd61fbda83ad5a541c11bd59685de15bf4bc34e78a8e5662c59a53b6\"" Jun 25 16:21:21.635018 containerd[1894]: time="2024-06-25T16:21:21.634980078Z" level=info msg="StartContainer for \"de89891bdd61fbda83ad5a541c11bd59685de15bf4bc34e78a8e5662c59a53b6\"" Jun 25 16:21:21.843624 containerd[1894]: time="2024-06-25T16:21:21.843538252Z" level=info msg="StartContainer for \"de89891bdd61fbda83ad5a541c11bd59685de15bf4bc34e78a8e5662c59a53b6\" returns successfully" Jun 25 16:21:22.180641 kubelet[3353]: I0625 16:21:22.180603 3353 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:21:22.192128 kubelet[3353]: I0625 16:21:22.192039 3353 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:21:22.610492 systemd[1]: run-containerd-runc-k8s.io-de89891bdd61fbda83ad5a541c11bd59685de15bf4bc34e78a8e5662c59a53b6-runc.QeTXZP.mount: Deactivated successfully. Jun 25 16:21:23.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.52:22-139.178.89.65:54560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:23.148429 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:21:23.148573 kernel: audit: type=1130 audit(1719332483.144:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.52:22-139.178.89.65:54560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:23.144448 systemd[1]: Started sshd@8-172.31.30.52:22-139.178.89.65:54560.service - OpenSSH per-connection server daemon (139.178.89.65:54560). 
Jun 25 16:21:23.392465 kernel: audit: type=1101 audit(1719332483.379:327): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.392606 kernel: audit: type=1103 audit(1719332483.384:328): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.392708 kernel: audit: type=1006 audit(1719332483.384:329): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:21:23.379000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.384000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.392934 sshd[5398]: Accepted publickey for core from 139.178.89.65 port 54560 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:23.395193 kernel: audit: type=1300 audit(1719332483.384:329): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc25d4c40 a2=3 a3=7f17bc1d1480 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:23.384000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc25d4c40 a2=3 a3=7f17bc1d1480 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:23.394289 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:23.384000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:23.399091 kernel: audit: type=1327 audit(1719332483.384:329): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:23.408362 systemd-logind[1885]: New session 9 of user core. Jun 25 16:21:23.416434 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:21:23.428000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.434235 kernel: audit: type=1105 audit(1719332483.428:330): pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.434000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:23.439432 kernel: audit: type=1103 audit(1719332483.434:331): pid=5401 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:24.080014 sshd[5398]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:24.082000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:24.088312 kernel: audit: type=1106 audit(1719332484.082:332): pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:24.087000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:24.092090 kernel: audit: type=1104 audit(1719332484.087:333): pid=5398 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:24.095306 systemd-logind[1885]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:21:24.097573 systemd[1]: sshd@8-172.31.30.52:22-139.178.89.65:54560.service: Deactivated successfully. Jun 25 16:21:24.098950 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:21:24.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.52:22-139.178.89.65:54560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:24.102747 systemd-logind[1885]: Removed session 9. 
Jun 25 16:21:26.614537 containerd[1894]: time="2024-06-25T16:21:26.614492969Z" level=info msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.684 [WARNING][5426] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"529c05d3-5db3-465d-a964-613b90de0483", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291", Pod:"coredns-5dd5756b68-t67mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ae64f11034", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.685 [INFO][5426] k8s.go 608: Cleaning up netns ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.685 [INFO][5426] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" iface="eth0" netns="" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.685 [INFO][5426] k8s.go 615: Releasing IP address(es) ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.685 [INFO][5426] utils.go 188: Calico CNI releasing IP address ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.713 [INFO][5432] ipam_plugin.go 411: Releasing address using handleID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.714 [INFO][5432] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.714 [INFO][5432] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.723 [WARNING][5432] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.723 [INFO][5432] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.726 [INFO][5432] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:26.730475 containerd[1894]: 2024-06-25 16:21:26.728 [INFO][5426] k8s.go 621: Teardown processing complete. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.731206 containerd[1894]: time="2024-06-25T16:21:26.730514405Z" level=info msg="TearDown network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" successfully" Jun 25 16:21:26.731206 containerd[1894]: time="2024-06-25T16:21:26.730550047Z" level=info msg="StopPodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" returns successfully" Jun 25 16:21:26.755131 containerd[1894]: time="2024-06-25T16:21:26.732787918Z" level=info msg="RemovePodSandbox for \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" Jun 25 16:21:26.755335 containerd[1894]: time="2024-06-25T16:21:26.755142765Z" level=info msg="Forcibly stopping sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\"" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.806 [WARNING][5450] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"529c05d3-5db3-465d-a964-613b90de0483", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"ea76699bfa0eef5eab9fb7d34ba46e803592e3515fea4df6fbd9f02c41a85291", Pod:"coredns-5dd5756b68-t67mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ae64f11034", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.807 [INFO][5450] k8s.go 608: Cleaning up netns ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.807 [INFO][5450] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" iface="eth0" netns="" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.807 [INFO][5450] k8s.go 615: Releasing IP address(es) ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.807 [INFO][5450] utils.go 188: Calico CNI releasing IP address ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.851 [INFO][5458] ipam_plugin.go 411: Releasing address using handleID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.852 [INFO][5458] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.853 [INFO][5458] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.862 [WARNING][5458] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.862 [INFO][5458] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" HandleID="k8s-pod-network.3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--t67mr-eth0" Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.864 [INFO][5458] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:26.870335 containerd[1894]: 2024-06-25 16:21:26.866 [INFO][5450] k8s.go 621: Teardown processing complete. ContainerID="3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33" Jun 25 16:21:26.870335 containerd[1894]: time="2024-06-25T16:21:26.869231628Z" level=info msg="TearDown network for sandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" successfully" Jun 25 16:21:26.886363 containerd[1894]: time="2024-06-25T16:21:26.886306605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:21:26.889875 containerd[1894]: time="2024-06-25T16:21:26.889803003Z" level=info msg="RemovePodSandbox \"3f46bb888a6f806201d9632124f43186b0cdedcf78070d76ae715abea54e0b33\" returns successfully" Jun 25 16:21:26.890701 containerd[1894]: time="2024-06-25T16:21:26.890672004Z" level=info msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.955 [WARNING][5479] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7047a1f0-d856-4ebc-9890-f4acf3b2eb78", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc", Pod:"csi-node-driver-l82ff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19077955ae1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.955 [INFO][5479] k8s.go 608: Cleaning up netns ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.956 [INFO][5479] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" iface="eth0" netns="" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.956 [INFO][5479] k8s.go 615: Releasing IP address(es) ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.956 [INFO][5479] utils.go 188: Calico CNI releasing IP address ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.988 [INFO][5486] ipam_plugin.go 411: Releasing address using handleID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.989 [INFO][5486] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:26.989 [INFO][5486] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:27.003 [WARNING][5486] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:27.003 [INFO][5486] ipam_plugin.go 439: Releasing address using workloadID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:27.005 [INFO][5486] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.009996 containerd[1894]: 2024-06-25 16:21:27.007 [INFO][5479] k8s.go 621: Teardown processing complete. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.010866 containerd[1894]: time="2024-06-25T16:21:27.010036241Z" level=info msg="TearDown network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" successfully" Jun 25 16:21:27.010866 containerd[1894]: time="2024-06-25T16:21:27.010099201Z" level=info msg="StopPodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" returns successfully" Jun 25 16:21:27.010866 containerd[1894]: time="2024-06-25T16:21:27.010682172Z" level=info msg="RemovePodSandbox for \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" Jun 25 16:21:27.010866 containerd[1894]: time="2024-06-25T16:21:27.010722909Z" level=info msg="Forcibly stopping sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\"" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.054 [WARNING][5506] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7047a1f0-d856-4ebc-9890-f4acf3b2eb78", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"69e8eb95fe7a70053fa09e547fb9096bed9b0d2f63881d4c149d3c4dc053f1bc", Pod:"csi-node-driver-l82ff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19077955ae1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.055 [INFO][5506] k8s.go 608: Cleaning up netns ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.055 [INFO][5506] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" iface="eth0" netns="" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.055 [INFO][5506] k8s.go 615: Releasing IP address(es) ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.055 [INFO][5506] utils.go 188: Calico CNI releasing IP address ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.109 [INFO][5512] ipam_plugin.go 411: Releasing address using handleID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.110 [INFO][5512] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.110 [INFO][5512] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.130 [WARNING][5512] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.130 [INFO][5512] ipam_plugin.go 439: Releasing address using workloadID ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" HandleID="k8s-pod-network.422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Workload="ip--172--31--30--52-k8s-csi--node--driver--l82ff-eth0" Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.133 [INFO][5512] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.143323 containerd[1894]: 2024-06-25 16:21:27.136 [INFO][5506] k8s.go 621: Teardown processing complete. ContainerID="422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c" Jun 25 16:21:27.143323 containerd[1894]: time="2024-06-25T16:21:27.139033065Z" level=info msg="TearDown network for sandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" successfully" Jun 25 16:21:27.147842 containerd[1894]: time="2024-06-25T16:21:27.147808532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:21:27.148004 containerd[1894]: time="2024-06-25T16:21:27.147884901Z" level=info msg="RemovePodSandbox \"422bb115c8fcde87aa126bc3fa09b27fb0a2853206eb8f405008967788e9642c\" returns successfully" Jun 25 16:21:27.148764 containerd[1894]: time="2024-06-25T16:21:27.148727023Z" level=info msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.193 [WARNING][5530] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0", GenerateName:"calico-kube-controllers-7584f4db69-", Namespace:"calico-system", SelfLink:"", UID:"fb951166-44c7-4dc8-b31a-6c99194a7411", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7584f4db69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6", Pod:"calico-kube-controllers-7584f4db69-64v2k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f4045df68b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.194 [INFO][5530] k8s.go 608: Cleaning up netns ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.194 [INFO][5530] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" iface="eth0" netns="" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.194 [INFO][5530] k8s.go 615: Releasing IP address(es) ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.194 [INFO][5530] utils.go 188: Calico CNI releasing IP address ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.232 [INFO][5536] ipam_plugin.go 411: Releasing address using handleID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.232 [INFO][5536] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.232 [INFO][5536] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.239 [WARNING][5536] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.240 [INFO][5536] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.244 [INFO][5536] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.250270 containerd[1894]: 2024-06-25 16:21:27.247 [INFO][5530] k8s.go 621: Teardown processing complete. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.251578 containerd[1894]: time="2024-06-25T16:21:27.250304770Z" level=info msg="TearDown network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" successfully" Jun 25 16:21:27.251578 containerd[1894]: time="2024-06-25T16:21:27.250341667Z" level=info msg="StopPodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" returns successfully" Jun 25 16:21:27.251578 containerd[1894]: time="2024-06-25T16:21:27.250946334Z" level=info msg="RemovePodSandbox for \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" Jun 25 16:21:27.251578 containerd[1894]: time="2024-06-25T16:21:27.250987992Z" level=info msg="Forcibly stopping sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\"" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.307 [WARNING][5555] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0", GenerateName:"calico-kube-controllers-7584f4db69-", Namespace:"calico-system", SelfLink:"", UID:"fb951166-44c7-4dc8-b31a-6c99194a7411", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7584f4db69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"8c7a7608b6304b8e2804e7390e60e8ae8adf4c7d6ce9b13612ecbbfb1899eea6", Pod:"calico-kube-controllers-7584f4db69-64v2k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f4045df68b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.307 [INFO][5555] k8s.go 608: Cleaning up netns ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.307 [INFO][5555] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" iface="eth0" netns="" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.308 [INFO][5555] k8s.go 615: Releasing IP address(es) ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.308 [INFO][5555] utils.go 188: Calico CNI releasing IP address ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.333 [INFO][5561] ipam_plugin.go 411: Releasing address using handleID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.333 [INFO][5561] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.334 [INFO][5561] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.340 [WARNING][5561] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.340 [INFO][5561] ipam_plugin.go 439: Releasing address using workloadID ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" HandleID="k8s-pod-network.80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Workload="ip--172--31--30--52-k8s-calico--kube--controllers--7584f4db69--64v2k-eth0" Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.342 [INFO][5561] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.346533 containerd[1894]: 2024-06-25 16:21:27.344 [INFO][5555] k8s.go 621: Teardown processing complete. ContainerID="80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca" Jun 25 16:21:27.347759 containerd[1894]: time="2024-06-25T16:21:27.346565605Z" level=info msg="TearDown network for sandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" successfully" Jun 25 16:21:27.352996 containerd[1894]: time="2024-06-25T16:21:27.352943237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:21:27.353174 containerd[1894]: time="2024-06-25T16:21:27.353034122Z" level=info msg="RemovePodSandbox \"80c652f9fc39e773422d001ea6e45512e0c924e6e9df0419a263954dd36d3eca\" returns successfully" Jun 25 16:21:27.353675 containerd[1894]: time="2024-06-25T16:21:27.353631889Z" level=info msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.407 [WARNING][5579] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aedaeef2-cca5-497d-9e7c-89ecb9122d1f", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74", Pod:"coredns-5dd5756b68-fkctm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd4a57b34d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.407 [INFO][5579] k8s.go 608: Cleaning up netns ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.407 [INFO][5579] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" iface="eth0" netns="" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.407 [INFO][5579] k8s.go 615: Releasing IP address(es) ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.407 [INFO][5579] utils.go 188: Calico CNI releasing IP address ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.446 [INFO][5585] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.446 [INFO][5585] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.446 [INFO][5585] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.453 [WARNING][5585] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.453 [INFO][5585] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.455 [INFO][5585] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.459148 containerd[1894]: 2024-06-25 16:21:27.457 [INFO][5579] k8s.go 621: Teardown processing complete. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.459871 containerd[1894]: time="2024-06-25T16:21:27.459191660Z" level=info msg="TearDown network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" successfully" Jun 25 16:21:27.459871 containerd[1894]: time="2024-06-25T16:21:27.459229689Z" level=info msg="StopPodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" returns successfully" Jun 25 16:21:27.459871 containerd[1894]: time="2024-06-25T16:21:27.459737849Z" level=info msg="RemovePodSandbox for \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" Jun 25 16:21:27.459871 containerd[1894]: time="2024-06-25T16:21:27.459776792Z" level=info msg="Forcibly stopping sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\"" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.510 [WARNING][5604] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aedaeef2-cca5-497d-9e7c-89ecb9122d1f", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"b1d610c53bc4636975d80a108d2ff17562e28ed58f5636aec24b183ed5281b74", Pod:"coredns-5dd5756b68-fkctm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd4a57b34d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.511 [INFO][5604] k8s.go 608: Cleaning up netns ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.511 [INFO][5604] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" iface="eth0" netns="" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.511 [INFO][5604] k8s.go 615: Releasing IP address(es) ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.511 [INFO][5604] utils.go 188: Calico CNI releasing IP address ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.539 [INFO][5611] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.540 [INFO][5611] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.540 [INFO][5611] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.547 [WARNING][5611] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.548 [INFO][5611] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" HandleID="k8s-pod-network.bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Workload="ip--172--31--30--52-k8s-coredns--5dd5756b68--fkctm-eth0" Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.549 [INFO][5611] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:21:27.554186 containerd[1894]: 2024-06-25 16:21:27.551 [INFO][5604] k8s.go 621: Teardown processing complete. ContainerID="bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9" Jun 25 16:21:27.555732 containerd[1894]: time="2024-06-25T16:21:27.554242341Z" level=info msg="TearDown network for sandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" successfully" Jun 25 16:21:27.560285 containerd[1894]: time="2024-06-25T16:21:27.560188657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:21:27.560547 containerd[1894]: time="2024-06-25T16:21:27.560500342Z" level=info msg="RemovePodSandbox \"bcdbfe447c08a7fe013a5b8c5d42ca86a024b922d4de4a524f3d6af5e5b158a9\" returns successfully" Jun 25 16:21:28.163175 systemd[1]: run-containerd-runc-k8s.io-eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d-runc.RgC1NY.mount: Deactivated successfully. Jun 25 16:21:29.117689 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:21:29.117855 kernel: audit: type=1130 audit(1719332489.108:335): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.52:22-139.178.89.65:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:29.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.52:22-139.178.89.65:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:29.109822 systemd[1]: Started sshd@9-172.31.30.52:22-139.178.89.65:35842.service - OpenSSH per-connection server daemon (139.178.89.65:35842). 
Jun 25 16:21:29.335683 kernel: audit: type=1101 audit(1719332489.322:336): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.339426 kernel: audit: type=1103 audit(1719332489.323:337): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.339547 kernel: audit: type=1006 audit(1719332489.324:338): pid=5637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:21:29.322000 audit[5637]: USER_ACCT pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.323000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.326096 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:29.340152 sshd[5637]: Accepted publickey for core from 139.178.89.65 port 35842 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:29.324000 audit[5637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff47012460 a2=3 a3=7f6e55871480 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:29.348398 kernel: audit: type=1300 audit(1719332489.324:338): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff47012460 a2=3 a3=7f6e55871480 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:29.324000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:29.349130 systemd-logind[1885]: New session 10 of user core. Jun 25 16:21:29.354656 kernel: audit: type=1327 audit(1719332489.324:338): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:29.354848 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:21:29.371000 audit[5637]: USER_START pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.376175 kernel: audit: type=1105 audit(1719332489.371:339): pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.377000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.381106 kernel: audit: type=1103 audit(1719332489.377:340): pid=5640 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.615337 sshd[5637]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:29.618000 audit[5637]: USER_END pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.629258 kernel: audit: type=1106 audit(1719332489.618:341): pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.643100 kernel: audit: type=1104 audit(1719332489.632:342): pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.632000 audit[5637]: CRED_DISP pid=5637 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:29.656037 systemd-logind[1885]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:21:29.658997 systemd[1]: sshd@9-172.31.30.52:22-139.178.89.65:35842.service: Deactivated successfully. Jun 25 16:21:29.660245 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:21:29.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.52:22-139.178.89.65:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:29.668534 systemd-logind[1885]: Removed session 10. Jun 25 16:21:34.641784 systemd[1]: Started sshd@10-172.31.30.52:22-139.178.89.65:35850.service - OpenSSH per-connection server daemon (139.178.89.65:35850). 
Jun 25 16:21:34.647418 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:21:34.647541 kernel: audit: type=1130 audit(1719332494.642:344): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.52:22-139.178.89.65:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:34.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.52:22-139.178.89.65:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:34.799000 audit[5661]: USER_ACCT pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.800467 sshd[5661]: Accepted publickey for core from 139.178.89.65 port 35850 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:34.801000 audit[5661]: CRED_ACQ pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.803480 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:34.805887 kernel: audit: type=1101 audit(1719332494.799:345): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.805980 kernel: audit: type=1103 audit(1719332494.801:346): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.806009 kernel: audit: type=1006 audit(1719332494.801:347): pid=5661 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:21:34.801000 audit[5661]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff79d2e0f0 a2=3 a3=7f53794b9480 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:34.814294 kernel: audit: type=1300 audit(1719332494.801:347): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff79d2e0f0 a2=3 a3=7f53794b9480 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:34.814416 kernel: audit: type=1327 audit(1719332494.801:347): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:34.801000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:34.823522 systemd-logind[1885]: New session 11 of user core. Jun 25 16:21:34.827737 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:21:34.838000 audit[5661]: USER_START pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.843175 kernel: audit: type=1105 audit(1719332494.838:348): pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.847237 kernel: audit: type=1103 audit(1719332494.843:349): pid=5664 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:34.843000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.071770 sshd[5661]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:35.077000 audit[5661]: USER_END pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.078000 audit[5661]: CRED_DISP pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.085237 kernel: audit: type=1106 audit(1719332495.077:350): pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.085330 kernel: audit: type=1104 audit(1719332495.078:351): pid=5661 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.082929 systemd[1]: sshd@10-172.31.30.52:22-139.178.89.65:35850.service: Deactivated successfully. Jun 25 16:21:35.084327 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:21:35.085877 systemd-logind[1885]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:21:35.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.52:22-139.178.89.65:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:35.087380 systemd-logind[1885]: Removed session 11. Jun 25 16:21:35.103796 systemd[1]: Started sshd@11-172.31.30.52:22-139.178.89.65:35864.service - OpenSSH per-connection server daemon (139.178.89.65:35864). 
Jun 25 16:21:35.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.52:22-139.178.89.65:35864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:35.277000 audit[5674]: USER_ACCT pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.278864 sshd[5674]: Accepted publickey for core from 139.178.89.65 port 35864 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:35.286000 audit[5674]: CRED_ACQ pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.287000 audit[5674]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfba46720 a2=3 a3=7f224267b480 items=0 ppid=1 pid=5674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:35.287000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:35.288606 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:35.301186 systemd-logind[1885]: New session 12 of user core. Jun 25 16:21:35.307605 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:21:35.328000 audit[5674]: USER_START pid=5674 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:35.331000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.019472 sshd[5674]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:36.022000 audit[5674]: USER_END pid=5674 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.022000 audit[5674]: CRED_DISP pid=5674 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.029695 systemd[1]: sshd@11-172.31.30.52:22-139.178.89.65:35864.service: Deactivated successfully. Jun 25 16:21:36.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.52:22-139.178.89.65:35864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:36.033458 systemd-logind[1885]: Session 12 logged out. Waiting for processes to exit. 
Jun 25 16:21:36.045172 systemd[1]: Started sshd@12-172.31.30.52:22-139.178.89.65:59570.service - OpenSSH per-connection server daemon (139.178.89.65:59570). Jun 25 16:21:36.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.52:22-139.178.89.65:59570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:36.045942 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:21:36.080611 systemd-logind[1885]: Removed session 12. Jun 25 16:21:36.257000 audit[5687]: USER_ACCT pid=5687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.258548 sshd[5687]: Accepted publickey for core from 139.178.89.65 port 59570 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:36.258000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.258000 audit[5687]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde76dcec0 a2=3 a3=7fd5ae3f7480 items=0 ppid=1 pid=5687 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:36.258000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:36.261050 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:36.268797 systemd-logind[1885]: New session 13 of user core. Jun 25 16:21:36.271367 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:21:36.279000 audit[5687]: USER_START pid=5687 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.282000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.588564 sshd[5687]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:36.592000 audit[5687]: USER_END pid=5687 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.592000 audit[5687]: CRED_DISP pid=5687 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:36.596878 systemd[1]: sshd@12-172.31.30.52:22-139.178.89.65:59570.service: Deactivated successfully. Jun 25 16:21:36.598107 systemd[1]: session-13.scope: Deactivated successfully. 
Jun 25 16:21:36.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.52:22-139.178.89.65:59570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:36.598950 systemd-logind[1885]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:21:36.600958 systemd-logind[1885]: Removed session 13. Jun 25 16:21:37.256307 systemd[1]: run-containerd-runc-k8s.io-fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83-runc.L6rcYu.mount: Deactivated successfully. Jun 25 16:21:37.423531 kubelet[3353]: I0625 16:21:37.423468 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-l82ff" podStartSLOduration=42.902445337 podCreationTimestamp="2024-06-25 16:20:44 +0000 UTC" firstStartedPulling="2024-06-25 16:21:11.066917967 +0000 UTC m=+44.828121714" lastFinishedPulling="2024-06-25 16:21:21.587863388 +0000 UTC m=+55.349067144" observedRunningTime="2024-06-25 16:21:22.28463696 +0000 UTC m=+56.045840732" watchObservedRunningTime="2024-06-25 16:21:37.423390767 +0000 UTC m=+71.184594538" Jun 25 16:21:41.639592 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:21:41.640262 kernel: audit: type=1130 audit(1719332501.630:371): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.52:22-139.178.89.65:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:41.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.52:22-139.178.89.65:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:41.630753 systemd[1]: Started sshd@13-172.31.30.52:22-139.178.89.65:59574.service - OpenSSH per-connection server daemon (139.178.89.65:59574). 
Jun 25 16:21:41.823000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.828235 kernel: audit: type=1101 audit(1719332501.823:372): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.828833 sshd[5725]: Accepted publickey for core from 139.178.89.65 port 59574 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:41.852923 kernel: audit: type=1103 audit(1719332501.830:373): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.853053 kernel: audit: type=1006 audit(1719332501.830:374): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:21:41.853141 kernel: audit: type=1300 audit(1719332501.830:374): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe405ea580 a2=3 a3=7ff6c7ea6480 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:41.830000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.830000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe405ea580 a2=3 a3=7ff6c7ea6480 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:41.835208 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:41.830000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:41.860413 kernel: audit: type=1327 audit(1719332501.830:374): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:41.864620 systemd-logind[1885]: New session 14 of user core. Jun 25 16:21:41.868477 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:21:41.882000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.889748 kernel: audit: type=1105 audit(1719332501.882:375): pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.888000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:41.894550 kernel: audit: type=1103 audit(1719332501.888:376): pid=5735 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:42.212836 sshd[5725]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:42.213000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:42.213000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:42.219723 kernel: audit: type=1106 audit(1719332502.213:377): pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:42.219807 kernel: audit: type=1104 audit(1719332502.213:378): pid=5725 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:42.225411 systemd[1]: sshd@13-172.31.30.52:22-139.178.89.65:59574.service: Deactivated successfully. Jun 25 16:21:42.226770 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:21:42.227126 systemd-logind[1885]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:21:42.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.52:22-139.178.89.65:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:42.228676 systemd-logind[1885]: Removed session 14. Jun 25 16:21:47.242084 systemd[1]: Started sshd@14-172.31.30.52:22-139.178.89.65:55436.service - OpenSSH per-connection server daemon (139.178.89.65:55436). 
Jun 25 16:21:47.246742 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:21:47.246852 kernel: audit: type=1130 audit(1719332507.241:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.52:22-139.178.89.65:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:47.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.52:22-139.178.89.65:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:47.410318 kernel: audit: type=1101 audit(1719332507.398:381): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.410445 kernel: audit: type=1103 audit(1719332507.402:382): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.410478 kernel: audit: type=1006 audit(1719332507.402:383): pid=5747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:21:47.398000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.402000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.416038 kernel: audit: type=1300 audit(1719332507.402:383): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe90506940 a2=3 a3=7f52c34f7480 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:47.416137 kernel: audit: type=1327 audit(1719332507.402:383): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:47.402000 audit[5747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe90506940 a2=3 a3=7f52c34f7480 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:47.402000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:47.404020 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:47.435951 sshd[5747]: Accepted publickey for core from 139.178.89.65 port 55436 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:47.436318 systemd-logind[1885]: New session 15 of user core. Jun 25 16:21:47.442624 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:21:47.466321 kernel: audit: type=1105 audit(1719332507.460:384): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.460000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.465000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.470528 kernel: audit: type=1103 audit(1719332507.465:385): pid=5750 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.798634 sshd[5747]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:47.800000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.801000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.805700 systemd[1]: sshd@14-172.31.30.52:22-139.178.89.65:55436.service: Deactivated successfully. Jun 25 16:21:47.806993 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:21:47.808614 kernel: audit: type=1106 audit(1719332507.800:386): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.808668 kernel: audit: type=1104 audit(1719332507.801:387): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:47.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.52:22-139.178.89.65:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:47.814090 systemd-logind[1885]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:21:47.823023 systemd-logind[1885]: Removed session 15. Jun 25 16:21:52.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.52:22-139.178.89.65:55440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:21:52.828662 systemd[1]: Started sshd@15-172.31.30.52:22-139.178.89.65:55440.service - OpenSSH per-connection server daemon (139.178.89.65:55440). Jun 25 16:21:52.832767 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:21:52.832853 kernel: audit: type=1130 audit(1719332512.828:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.52:22-139.178.89.65:55440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:53.008000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.011753 sshd[5771]: Accepted publickey for core from 139.178.89.65 port 55440 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:53.008000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.018429 kernel: audit: type=1101 audit(1719332513.008:390): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.018650 kernel: audit: type=1103 audit(1719332513.008:391): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.018697 kernel: audit: type=1006 audit(1719332513.008:392): pid=5771 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:21:53.019513 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:53.020613 kernel: audit: type=1300 audit(1719332513.008:392): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff06589d10 a2=3 a3=7f97cd30f480 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:53.008000 audit[5771]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff06589d10 a2=3 a3=7f97cd30f480 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:53.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:53.024435 kernel: audit: type=1327 audit(1719332513.008:392): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:53.033439 systemd-logind[1885]: New session 16 of user core. Jun 25 16:21:53.036489 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:21:53.045000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.045000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.051770 kernel: audit: type=1105 audit(1719332513.045:393): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.052003 kernel: audit: type=1103 audit(1719332513.045:394): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.523744 sshd[5771]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:53.524000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.526000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.534092 kernel: audit: type=1106 audit(1719332513.524:395): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.534204 kernel: audit: type=1104 audit(1719332513.526:396): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:53.531911 systemd[1]: sshd@15-172.31.30.52:22-139.178.89.65:55440.service: Deactivated successfully. Jun 25 16:21:53.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.52:22-139.178.89.65:55440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:53.535680 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:21:53.535708 systemd-logind[1885]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:21:53.539414 systemd-logind[1885]: Removed session 16. Jun 25 16:21:58.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.52:22-139.178.89.65:53020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:21:58.551793 systemd[1]: Started sshd@16-172.31.30.52:22-139.178.89.65:53020.service - OpenSSH per-connection server daemon (139.178.89.65:53020). Jun 25 16:21:58.553705 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:21:58.553758 kernel: audit: type=1130 audit(1719332518.550:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.52:22-139.178.89.65:53020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:58.709000 audit[5805]: USER_ACCT pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.710670 sshd[5805]: Accepted publickey for core from 139.178.89.65 port 53020 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:58.713021 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:58.709000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.715660 kernel: audit: type=1101 audit(1719332518.709:399): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.715759 kernel: audit: type=1103 audit(1719332518.709:400): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.738577 kernel: audit: type=1006 audit(1719332518.709:401): pid=5805 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:21:58.738685 kernel: audit: type=1300 audit(1719332518.709:401): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff41f8e1b0 a2=3 a3=7fd9071c5480 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:58.738717 kernel: audit: type=1327 audit(1719332518.709:401): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:58.709000 audit[5805]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff41f8e1b0 a2=3 a3=7fd9071c5480 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:58.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:58.769908 systemd-logind[1885]: New session 17 of user core. Jun 25 16:21:58.773974 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:21:58.805374 kernel: audit: type=1105 audit(1719332518.798:402): pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.798000 audit[5805]: USER_START pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.804000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:58.811145 kernel: audit: type=1103 audit(1719332518.804:403): pid=5808 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.107381 sshd[5805]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:59.112000 audit[5805]: USER_END pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.119835 kernel: audit: type=1106 audit(1719332519.112:404): pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.119937 kernel: audit: type=1104 audit(1719332519.115:405): pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.115000 audit[5805]: CRED_DISP pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.119241 systemd[1]: sshd@16-172.31.30.52:22-139.178.89.65:53020.service: Deactivated successfully. Jun 25 16:21:59.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.52:22-139.178.89.65:53020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:59.121289 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:21:59.122112 systemd-logind[1885]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:21:59.123940 systemd-logind[1885]: Removed session 17. Jun 25 16:21:59.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.52:22-139.178.89.65:53036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:21:59.135756 systemd[1]: Started sshd@17-172.31.30.52:22-139.178.89.65:53036.service - OpenSSH per-connection server daemon (139.178.89.65:53036). Jun 25 16:21:59.304000 audit[5817]: USER_ACCT pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.306297 sshd[5817]: Accepted publickey for core from 139.178.89.65 port 53036 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:21:59.308000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.311000 audit[5817]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9bed6160 a2=3 a3=7fd65716d480 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:59.311000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:59.312930 sshd[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:59.334230 systemd-logind[1885]: New session 18 of user core. Jun 25 16:21:59.337539 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:21:59.353000 audit[5817]: USER_START pid=5817 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:21:59.356000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.164607 kubelet[3353]: I0625 16:22:00.164542 3353 topology_manager.go:215] "Topology Admit Handler" podUID="8b1efd61-0b38-4f98-b407-b19e359ea0d0" podNamespace="calico-apiserver" podName="calico-apiserver-57bbb8976b-8djwh" Jun 25 16:22:00.280386 kubelet[3353]: I0625 16:22:00.280207 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpkk\" (UniqueName: \"kubernetes.io/projected/8b1efd61-0b38-4f98-b407-b19e359ea0d0-kube-api-access-llpkk\") pod \"calico-apiserver-57bbb8976b-8djwh\" (UID: \"8b1efd61-0b38-4f98-b407-b19e359ea0d0\") " pod="calico-apiserver/calico-apiserver-57bbb8976b-8djwh" Jun 25 16:22:00.280739 kubelet[3353]: I0625 16:22:00.280721 3353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b1efd61-0b38-4f98-b407-b19e359ea0d0-calico-apiserver-certs\") pod \"calico-apiserver-57bbb8976b-8djwh\" (UID: \"8b1efd61-0b38-4f98-b407-b19e359ea0d0\") " pod="calico-apiserver/calico-apiserver-57bbb8976b-8djwh" Jun 25 16:22:00.350000 audit[5827]: NETFILTER_CFG table=filter:113 family=2 entries=8 op=nft_register_rule pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:00.350000 audit[5827]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2932 a0=3 a1=7ffd22159b80 a2=0 a3=7ffd22159b6c items=0 ppid=3494 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:00.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:00.356000 audit[5827]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:00.356000 audit[5827]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd22159b80 a2=0 a3=7ffd22159b6c items=0 ppid=3494 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:00.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:00.382048 kubelet[3353]: E0625 16:22:00.381933 3353 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:22:00.397251 kubelet[3353]: E0625 16:22:00.397207 3353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b1efd61-0b38-4f98-b407-b19e359ea0d0-calico-apiserver-certs podName:8b1efd61-0b38-4f98-b407-b19e359ea0d0 nodeName:}" failed. No retries permitted until 2024-06-25 16:22:00.882100185 +0000 UTC m=+94.643303968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8b1efd61-0b38-4f98-b407-b19e359ea0d0-calico-apiserver-certs") pod "calico-apiserver-57bbb8976b-8djwh" (UID: "8b1efd61-0b38-4f98-b407-b19e359ea0d0") : secret "calico-apiserver-certs" not found Jun 25 16:22:00.470000 audit[5830]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=5830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:00.470000 audit[5830]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc91322840 a2=0 a3=7ffc9132282c items=0 ppid=3494 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:00.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:00.476000 audit[5830]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:00.476000 audit[5830]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc91322840 a2=0 a3=7ffc9132282c items=0 ppid=3494 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:00.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:00.524572 sshd[5817]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:00.525000 audit[5817]: USER_END pid=5817 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.525000 audit[5817]: CRED_DISP pid=5817 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.529788 systemd[1]: sshd@17-172.31.30.52:22-139.178.89.65:53036.service: Deactivated successfully. Jun 25 16:22:00.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.52:22-139.178.89.65:53036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:00.532498 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:22:00.532707 systemd-logind[1885]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:22:00.536425 systemd-logind[1885]: Removed session 18. Jun 25 16:22:00.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.52:22-139.178.89.65:53048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:00.549607 systemd[1]: Started sshd@18-172.31.30.52:22-139.178.89.65:53048.service - OpenSSH per-connection server daemon (139.178.89.65:53048). Jun 25 16:22:00.748000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.750381 sshd[5833]: Accepted publickey for core from 139.178.89.65 port 53048 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:00.755000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.755000 audit[5833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0ed58010 a2=3 a3=7f55a689a480 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:00.755000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:00.758394 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:00.797072 systemd-logind[1885]: New session 19 of user core. Jun 25 16:22:00.799797 systemd[1]: Started session-19.scope - Session 19 of User core. 
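The MountVolume.SetUp failure above ("calico-apiserver-certs" not found) is retried after a delay: the secret lookup error is logged at 16:22:00.381933 and the next attempt is not permitted before 16:22:00.882100185, roughly the advertised durationBeforeRetry of 500ms. A quick check of that interval (seconds-within-the-minute copied from the records above; illustrative Python, not part of the captured log):

# Fractions of the 16:22:00 minute taken from the kubelet records above.
error_logged = 0.381933          # E0625 16:22:00.381933  secret "calico-apiserver-certs" not found
retry_not_before = 0.882100185   # "No retries permitted until ... 16:22:00.882100185"

print(f"backoff interval = {retry_not_before - error_logged:.3f} s")
# -> 0.500 s, consistent with durationBeforeRetry 500ms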
Jun 25 16:22:00.806000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:00.809000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:01.191158 containerd[1894]: time="2024-06-25T16:22:01.191007471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bbb8976b-8djwh,Uid:8b1efd61-0b38-4f98-b407-b19e359ea0d0,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:22:01.952368 (udev-worker)[5869]: Network interface NamePolicy= disabled on kernel command line. Jun 25 16:22:01.955273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:22:01.955499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali419bc8bb582: link becomes ready Jun 25 16:22:01.952599 systemd-networkd[1581]: cali419bc8bb582: Link UP Jun 25 16:22:01.955584 systemd-networkd[1581]: cali419bc8bb582: Gained carrier Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.496 [INFO][5843] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0 calico-apiserver-57bbb8976b- calico-apiserver 8b1efd61-0b38-4f98-b407-b19e359ea0d0 1036 0 2024-06-25 16:21:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57bbb8976b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-52 calico-apiserver-57bbb8976b-8djwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali419bc8bb582 [] []}} ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.496 [INFO][5843] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.823 [INFO][5857] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" HandleID="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Workload="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.843 [INFO][5857] ipam_plugin.go 264: Auto assigning IP ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" HandleID="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Workload="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f1c0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-52", "pod":"calico-apiserver-57bbb8976b-8djwh", "timestamp":"2024-06-25 16:22:01.823525658 +0000 UTC"}, Hostname:"ip-172-31-30-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.843 [INFO][5857] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.843 [INFO][5857] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.843 [INFO][5857] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-52' Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.848 [INFO][5857] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.858 [INFO][5857] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.871 [INFO][5857] ipam.go 489: Trying affinity for 192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.879 [INFO][5857] ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.883 [INFO][5857] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.883 [INFO][5857] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.886 [INFO][5857] ipam.go 1685: Creating new handle: k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1 Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.912 [INFO][5857] ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.925 [INFO][5857] ipam.go 1216: Successfully claimed IPs: [192.168.50.5/26] block=192.168.50.0/26 handle="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.925 [INFO][5857] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.5/26] handle="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" host="ip-172-31-30-52" Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.926 [INFO][5857] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:22:02.035177 containerd[1894]: 2024-06-25 16:22:01.926 [INFO][5857] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.50.5/26] IPv6=[] ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" HandleID="k8s-pod-network.631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Workload="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:01.931 [INFO][5843] k8s.go 386: Populated endpoint ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0", GenerateName:"calico-apiserver-57bbb8976b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b1efd61-0b38-4f98-b407-b19e359ea0d0", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bbb8976b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"", Pod:"calico-apiserver-57bbb8976b-8djwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali419bc8bb582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:01.931 [INFO][5843] k8s.go 387: Calico CNI using IPs: [192.168.50.5/32] ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:01.931 [INFO][5843] dataplane_linux.go 68: Setting the host side veth name to cali419bc8bb582 ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:01.956 [INFO][5843] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:01.961 [INFO][5843] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0", GenerateName:"calico-apiserver-57bbb8976b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b1efd61-0b38-4f98-b407-b19e359ea0d0", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bbb8976b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-52", ContainerID:"631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1", Pod:"calico-apiserver-57bbb8976b-8djwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali419bc8bb582", MAC:"42:d6:a1:93:f2:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:22:02.051016 containerd[1894]: 2024-06-25 16:22:02.021 [INFO][5843] k8s.go 500: Wrote updated endpoint to datastore ContainerID="631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1" Namespace="calico-apiserver" Pod="calico-apiserver-57bbb8976b-8djwh" WorkloadEndpoint="ip--172--31--30--52-k8s-calico--apiserver--57bbb8976b--8djwh-eth0" Jun 25 16:22:02.197314 containerd[1894]: time="2024-06-25T16:22:02.196766935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:02.197314 containerd[1894]: time="2024-06-25T16:22:02.197055376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:02.197314 containerd[1894]: time="2024-06-25T16:22:02.197099108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:02.197314 containerd[1894]: time="2024-06-25T16:22:02.197115603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:02.254913 systemd[1]: run-containerd-runc-k8s.io-631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1-runc.OtFmzh.mount: Deactivated successfully. 
Jun 25 16:22:02.342000 audit[5923]: NETFILTER_CFG table=filter:117 family=2 entries=55 op=nft_register_chain pid=5923 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:22:02.342000 audit[5923]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff182d1d10 a2=0 a3=7fff182d1cfc items=0 ppid=4531 pid=5923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:02.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:22:02.384608 containerd[1894]: time="2024-06-25T16:22:02.384553976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bbb8976b-8djwh,Uid:8b1efd61-0b38-4f98-b407-b19e359ea0d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1\"" Jun 25 16:22:02.387389 containerd[1894]: time="2024-06-25T16:22:02.387347990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:22:03.059101 systemd-networkd[1581]: cali419bc8bb582: Gained IPv6LL Jun 25 16:22:03.363000 audit[5931]: NETFILTER_CFG table=filter:118 family=2 entries=22 op=nft_register_rule pid=5931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:03.363000 audit[5931]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffcf60509b0 a2=0 a3=7ffcf605099c items=0 ppid=3494 pid=5931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:03.367000 audit[5931]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:03.367000 audit[5931]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcf60509b0 a2=0 a3=0 items=0 ppid=3494 pid=5931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:03.405000 audit[5933]: NETFILTER_CFG table=filter:120 family=2 entries=34 op=nft_register_rule pid=5933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:03.405000 audit[5933]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffdc166c0e0 a2=0 a3=7ffdc166c0cc items=0 ppid=3494 pid=5933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:03.410000 audit[5933]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:22:03.410000 audit[5933]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc166c0e0 a2=0 a3=0 items=0 ppid=3494 pid=5933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:03.425106 sshd[5833]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:03.425000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.427000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.52:22-139.178.89.65:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:03.453907 systemd[1]: Started sshd@19-172.31.30.52:22-139.178.89.65:53054.service - OpenSSH per-connection server daemon (139.178.89.65:53054). Jun 25 16:22:03.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.52:22-139.178.89.65:53048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:03.465738 systemd[1]: sshd@18-172.31.30.52:22-139.178.89.65:53048.service: Deactivated successfully. Jun 25 16:22:03.469865 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:22:03.472600 systemd-logind[1885]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:22:03.499708 systemd-logind[1885]: Removed session 19. 
Jun 25 16:22:03.729000 audit[5935]: USER_ACCT pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.731247 kernel: kauditd_printk_skb: 51 callbacks suppressed Jun 25 16:22:03.731397 kernel: audit: type=1101 audit(1719332523.729:435): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.732761 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 53054 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:03.734180 kernel: audit: type=1103 audit(1719332523.732:436): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.732000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.735350 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:03.740415 kernel: audit: type=1006 audit(1719332523.732:437): pid=5935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jun 25 16:22:03.746141 kernel: audit: type=1300 audit(1719332523.732:437): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe9526680 a2=3 a3=7f8e6de79480 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.732000 audit[5935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe9526680 a2=3 a3=7f8e6de79480 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:03.754882 kernel: audit: type=1327 audit(1719332523.732:437): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:03.732000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:03.778275 systemd-logind[1885]: New session 20 of user core. Jun 25 16:22:03.780313 systemd[1]: Started session-20.scope - Session 20 of User core. 
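Each audit event in this capture shows up both as a named record (USER_ACCT, CRED_ACQ, ...) and as the kernel's numeric echo (audit: type=1101, type=1103, ...). The numeric values follow the standard Linux audit type numbering; a small lookup table limited to the types that actually occur in this section (illustrative Python, not part of the captured log):

# Numeric audit record types seen in this section and the record names
# the same events appear under elsewhere in the log.
AUDIT_TYPES_SEEN = {
    1006: "LOGIN",          # auid/ses assignment when the SSH session starts
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM credential acquisition
    1104: "CRED_DISP",      # PAM credential disposal
    1105: "USER_START",     # PAM session open
    1106: "USER_END",       # PAM session close
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall record attached to the event
    1325: "NETFILTER_CFG",  # iptables/nftables table change
    1327: "PROCTITLE",      # hex-encoded process title (see decoder above)
}

def audit_type_name(type_id: int) -> str:
    """Return the symbolic name for a numeric audit type, e.g. 1130 -> SERVICE_START."""
    return AUDIT_TYPES_SEEN.get(type_id, f"UNKNOWN({type_id})")

print(audit_type_name(1130))  # SERVICE_START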
Jun 25 16:22:03.816000 audit[5935]: USER_START pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.828236 kernel: audit: type=1105 audit(1719332523.816:438): pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.837091 kernel: audit: type=1103 audit(1719332523.827:439): pid=5939 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:03.827000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.707142 kernel: audit: type=1106 audit(1719332525.688:440): pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.712743 kernel: audit: type=1104 audit(1719332525.688:441): pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.712818 kernel: audit: type=1131 audit(1719332525.693:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.52:22-139.178.89.65:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:05.688000 audit[5935]: USER_END pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.688000 audit[5935]: CRED_DISP pid=5935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.52:22-139.178.89.65:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:05.693798 systemd[1]: sshd@19-172.31.30.52:22-139.178.89.65:53054.service: Deactivated successfully. Jun 25 16:22:05.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.52:22-139.178.89.65:53066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:05.687460 sshd[5935]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:05.695735 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:22:05.699609 systemd-logind[1885]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:22:05.702157 systemd-logind[1885]: Removed session 20. Jun 25 16:22:05.724776 systemd[1]: Started sshd@20-172.31.30.52:22-139.178.89.65:53066.service - OpenSSH per-connection server daemon (139.178.89.65:53066). Jun 25 16:22:05.997000 audit[5951]: USER_ACCT pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.998408 sshd[5951]: Accepted publickey for core from 139.178.89.65 port 53066 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:05.999000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:05.999000 audit[5951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2032c8d0 a2=3 a3=7f278a35e480 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:05.999000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:06.001657 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:06.008719 systemd-logind[1885]: New session 21 of user core. Jun 25 16:22:06.013464 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:22:06.020000 audit[5951]: USER_START pid=5951 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:06.024000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:06.324843 sshd[5951]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:06.325000 audit[5951]: USER_END pid=5951 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:06.325000 audit[5951]: CRED_DISP pid=5951 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:06.328437 systemd[1]: sshd@20-172.31.30.52:22-139.178.89.65:53066.service: Deactivated successfully. 
Jun 25 16:22:06.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.52:22-139.178.89.65:53066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:06.329810 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:22:06.332953 systemd-logind[1885]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:22:06.335748 systemd-logind[1885]: Removed session 21. Jun 25 16:22:06.786043 containerd[1894]: time="2024-06-25T16:22:06.724601716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:22:06.799401 containerd[1894]: time="2024-06-25T16:22:06.799316397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.406283454s" Jun 25 16:22:06.799401 containerd[1894]: time="2024-06-25T16:22:06.799394044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:22:06.854342 containerd[1894]: time="2024-06-25T16:22:06.854278338Z" level=info msg="CreateContainer within sandbox \"631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:22:06.878482 containerd[1894]: time="2024-06-25T16:22:06.852617014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:06.895787 containerd[1894]: time="2024-06-25T16:22:06.889191164Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:06.895787 containerd[1894]: time="2024-06-25T16:22:06.890600426Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:06.902000 audit[5987]: NETFILTER_CFG table=filter:122 family=2 entries=34 op=nft_register_rule pid=5987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:06.902000 audit[5987]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7fffd3ef7730 a2=0 a3=7fffd3ef771c items=0 ppid=3494 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:06.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:06.903000 audit[5987]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:06.903000 audit[5987]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd3ef7730 a2=0 a3=0 items=0 ppid=3494 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:06.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:06.927752 containerd[1894]: time="2024-06-25T16:22:06.927701896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:06.949160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095367055.mount: Deactivated successfully. Jun 25 16:22:06.996986 containerd[1894]: time="2024-06-25T16:22:06.996937590Z" level=info msg="CreateContainer within sandbox \"631bf2066b51c602b18b5ac3dcc113c80bfb81468746084b32099c485c3367a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71e5531c71bd4329dcb97fb0d34a1faaebefdc41666dbb79c89b9b2b939f1190\"" Jun 25 16:22:06.998913 containerd[1894]: time="2024-06-25T16:22:06.997772886Z" level=info msg="StartContainer for \"71e5531c71bd4329dcb97fb0d34a1faaebefdc41666dbb79c89b9b2b939f1190\"" Jun 25 16:22:07.158817 systemd[1]: run-containerd-runc-k8s.io-71e5531c71bd4329dcb97fb0d34a1faaebefdc41666dbb79c89b9b2b939f1190-runc.ogrLAz.mount: Deactivated successfully. Jun 25 16:22:07.302298 containerd[1894]: time="2024-06-25T16:22:07.302243882Z" level=info msg="StartContainer for \"71e5531c71bd4329dcb97fb0d34a1faaebefdc41666dbb79c89b9b2b939f1190\" returns successfully" Jun 25 16:22:07.528000 audit[6052]: NETFILTER_CFG table=filter:124 family=2 entries=34 op=nft_register_rule pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:07.528000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffe40092b90 a2=0 a3=7ffe40092b7c items=0 ppid=3494 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:07.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:07.529000 audit[6052]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:07.529000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe40092b90 a2=0 a3=0 items=0 ppid=3494 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:07.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:08.896590 kubelet[3353]: I0625 16:22:08.896551 3353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57bbb8976b-8djwh" podStartSLOduration=5.483090439 podCreationTimestamp="2024-06-25 16:21:59 +0000 UTC" firstStartedPulling="2024-06-25 16:22:02.386595787 +0000 UTC m=+96.147799549" lastFinishedPulling="2024-06-25 16:22:06.799995952 +0000 UTC m=+100.561199710" observedRunningTime="2024-06-25 16:22:07.48831448 +0000 UTC m=+101.249518251" watchObservedRunningTime="2024-06-25 16:22:08.8964906 +0000 UTC m=+102.657694371" Jun 25 16:22:08.950000 audit[6054]: NETFILTER_CFG table=filter:126 
family=2 entries=33 op=nft_register_rule pid=6054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:08.951759 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:22:08.951859 kernel: audit: type=1325 audit(1719332528.950:456): table=filter:126 family=2 entries=33 op=nft_register_rule pid=6054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:08.954096 kernel: audit: type=1300 audit(1719332528.950:456): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff3c628ce0 a2=0 a3=7fff3c628ccc items=0 ppid=3494 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:08.950000 audit[6054]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff3c628ce0 a2=0 a3=7fff3c628ccc items=0 ppid=3494 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:08.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:08.961390 kernel: audit: type=1327 audit(1719332528.950:456): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:08.952000 audit[6054]: NETFILTER_CFG table=nat:127 family=2 entries=27 op=nft_register_chain pid=6054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:08.952000 audit[6054]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff3c628ce0 a2=0 a3=0 items=0 ppid=3494 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:08.967502 kernel: audit: type=1325 audit(1719332528.952:457): table=nat:127 family=2 entries=27 op=nft_register_chain pid=6054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:08.967624 kernel: audit: type=1300 audit(1719332528.952:457): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff3c628ce0 a2=0 a3=0 items=0 ppid=3494 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:08.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:08.969421 kernel: audit: type=1327 audit(1719332528.952:457): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:11.355666 systemd[1]: Started sshd@21-172.31.30.52:22-139.178.89.65:34934.service - OpenSSH per-connection server daemon (139.178.89.65:34934). Jun 25 16:22:11.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.52:22-139.178.89.65:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:11.359113 kernel: audit: type=1130 audit(1719332531.355:458): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.52:22-139.178.89.65:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:11.571000 audit[6057]: USER_ACCT pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.573000 audit[6057]: CRED_ACQ pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.577030 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:11.579098 kernel: audit: type=1101 audit(1719332531.571:459): pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.579198 kernel: audit: type=1103 audit(1719332531.573:460): pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.579233 kernel: audit: type=1006 audit(1719332531.573:461): pid=6057 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:22:11.579621 sshd[6057]: Accepted publickey for core from 139.178.89.65 port 34934 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:11.573000 audit[6057]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9de94ec0 a2=3 a3=7fcd4c3d5480 items=0 ppid=1 pid=6057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.573000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:11.590231 systemd-logind[1885]: New session 22 of user core. Jun 25 16:22:11.595939 systemd[1]: Started session-22.scope - Session 22 of User core. 
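The pod_startup_latency_tracker record a few lines above reports podStartSLOduration=5.483090439 for calico-apiserver-57bbb8976b-8djwh. Taking the monotonic offsets (m=+...) quoted in that record, the figure is consistent with the time from pod creation to the watch observing it running, minus the image-pulling interval. A worked check, with all numbers copied from that record and the creationTimestamp taken at the whole-second resolution logged (illustrative Python, not part of the captured log):

# Monotonic offsets (seconds since kubelet start) from the record above.
first_started_pulling  = 96.147799549
last_finished_pulling  = 100.561199710
watch_observed_running = 102.657694371

# creationTimestamp is 16:21:59; the watch observation is stamped
# 16:22:08.8964906, i.e. 9.8964906 s later, which places creation at:
created = watch_observed_running - 9.8964906     # = 92.761203771

pulling = last_finished_pulling - first_started_pulling   # 4.413400161 s spent pulling the image
slo = watch_observed_running - created - pulling          # 5.483090439 s
print(f"image pulling: {pulling:.9f} s, podStartSLOduration: {slo:.9f} s")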
Jun 25 16:22:11.612000 audit[6057]: USER_START pid=6057 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.616000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.974753 sshd[6057]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:11.978000 audit[6057]: USER_END pid=6057 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.982000 audit[6057]: CRED_DISP pid=6057 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:11.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.52:22-139.178.89.65:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:11.993901 systemd-logind[1885]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:22:11.996367 systemd[1]: sshd@21-172.31.30.52:22-139.178.89.65:34934.service: Deactivated successfully. Jun 25 16:22:12.000480 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:22:12.002492 systemd-logind[1885]: Removed session 22. 
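Every audit(...) stamp in this section is a Unix epoch time with millisecond resolution plus a per-boot serial number, e.g. audit(1719332531.573:461) for the session-22 login above. Converting the epoch part back to UTC reproduces the journal's own wall-clock timestamps; a quick check with two stamps taken from records in this section (illustrative Python, not part of the captured log):

from datetime import datetime, timezone

# audit(<epoch.millis>:<serial>) stamps copied from records above.
stamps = {
    "sshd@16 SERVICE_START": 1719332518.550,  # journal line stamped Jun 25 16:21:58.551
    "session-22 CRED_ACQ":   1719332531.573,  # journal line stamped Jun 25 16:22:11.573
}

for label, epoch in stamps.items():
    # Both conversions land on 2024-06-25 16:21:58 and 16:22:11 UTC,
    # matching the wall-clock times the journal printed for those records.
    print(label, datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())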
Jun 25 16:22:14.139851 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:22:14.140095 kernel: audit: type=1325 audit(1719332534.136:467): table=filter:128 family=2 entries=20 op=nft_register_rule pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:14.136000 audit[6078]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:14.136000 audit[6078]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc8b0076c0 a2=0 a3=7ffc8b0076ac items=0 ppid=3494 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:14.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:14.144835 kernel: audit: type=1300 audit(1719332534.136:467): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc8b0076c0 a2=0 a3=7ffc8b0076ac items=0 ppid=3494 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:14.144913 kernel: audit: type=1327 audit(1719332534.136:467): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:14.139000 audit[6078]: NETFILTER_CFG table=nat:129 family=2 entries=106 op=nft_register_chain pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:14.139000 audit[6078]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc8b0076c0 a2=0 a3=7ffc8b0076ac items=0 ppid=3494 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:14.149645 kernel: audit: type=1325 audit(1719332534.139:468): table=nat:129 family=2 entries=106 op=nft_register_chain pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:14.149751 kernel: audit: type=1300 audit(1719332534.139:468): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc8b0076c0 a2=0 a3=7ffc8b0076ac items=0 ppid=3494 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:14.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:14.151506 kernel: audit: type=1327 audit(1719332534.139:468): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:17.005868 systemd[1]: Started sshd@22-172.31.30.52:22-139.178.89.65:34538.service - OpenSSH per-connection server daemon (139.178.89.65:34538). Jun 25 16:22:17.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.52:22-139.178.89.65:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:17.012105 kernel: audit: type=1130 audit(1719332537.005:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.52:22-139.178.89.65:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:17.231000 audit[6080]: USER_ACCT pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.246824 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 34538 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:17.248084 kernel: audit: type=1101 audit(1719332537.231:470): pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.250000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.251979 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:17.255160 kernel: audit: type=1103 audit(1719332537.250:471): pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.255299 kernel: audit: type=1006 audit(1719332537.250:472): pid=6080 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:22:17.250000 audit[6080]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc86019ea0 a2=3 a3=7fef70cd0480 items=0 ppid=1 pid=6080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:17.250000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:17.270706 systemd-logind[1885]: New session 23 of user core. Jun 25 16:22:17.275844 systemd[1]: Started session-23.scope - Session 23 of User core. 
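Two encoded fields recur in the audit records above and are easier to read once decoded: PROCTITLE is the hex-encoded, NUL-separated argv of the audited process, and each audit(...) stamp is epoch seconds plus a record serial. A minimal decoding sketch in Python, using values copied verbatim from the records above (the variable names are ours, not part of any tool):

from datetime import datetime, timezone

# Decode the hex PROCTITLE field: the bytes are the process argv joined by NUL.
proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
print(" ".join(arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
# The other PROCTITLE seen above, 737368643A20636F7265205B707269765D, decodes the
# same way to "sshd: core [priv]" (a single argument, so no NUL separators).

# Decode an audit(<epoch>.<millis>:<serial>) stamp; it matches the journal
# timestamp prefix of the line it appears on (this log is in UTC).
epoch, serial = "1719332534.136:467".split(":")
print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
# -> 2024-06-25 16:22:14.136000+00:00 serial 467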
Jun 25 16:22:17.290000 audit[6080]: USER_START pid=6080 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.292000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.563309 sshd[6080]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:17.564000 audit[6080]: USER_END pid=6080 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.566000 audit[6080]: CRED_DISP pid=6080 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:17.569448 systemd[1]: sshd@22-172.31.30.52:22-139.178.89.65:34538.service: Deactivated successfully. Jun 25 16:22:17.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.52:22-139.178.89.65:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:17.570839 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:22:17.571320 systemd-logind[1885]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:22:17.575114 systemd-logind[1885]: Removed session 23. Jun 25 16:22:22.593162 systemd[1]: Started sshd@23-172.31.30.52:22-139.178.89.65:34548.service - OpenSSH per-connection server daemon (139.178.89.65:34548). Jun 25 16:22:22.597904 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:22:22.598031 kernel: audit: type=1130 audit(1719332542.593:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.52:22-139.178.89.65:34548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.52:22-139.178.89.65:34548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:22.795000 audit[6095]: USER_ACCT pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.800249 kernel: audit: type=1101 audit(1719332542.795:479): pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.800321 sshd[6095]: Accepted publickey for core from 139.178.89.65 port 34548 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:22.800329 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:22.795000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.805442 kernel: audit: type=1103 audit(1719332542.795:480): pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.812572 kernel: audit: type=1006 audit(1719332542.795:481): pid=6095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:22:22.812707 kernel: audit: type=1300 audit(1719332542.795:481): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97117230 a2=3 a3=7fb18ea5b480 items=0 ppid=1 pid=6095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:22.795000 audit[6095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97117230 a2=3 a3=7fb18ea5b480 items=0 ppid=1 pid=6095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:22.811989 systemd-logind[1885]: New session 24 of user core. Jun 25 16:22:22.815563 kernel: audit: type=1327 audit(1719332542.795:481): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:22.795000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:22.815682 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:22:22.842000 audit[6095]: USER_START pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.848275 kernel: audit: type=1105 audit(1719332542.842:482): pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.847000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:22.853233 kernel: audit: type=1103 audit(1719332542.847:483): pid=6098 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:23.164714 sshd[6095]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:23.173479 kernel: audit: type=1106 audit(1719332543.166:484): pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:23.166000 audit[6095]: USER_END pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:23.178040 kernel: audit: type=1104 audit(1719332543.166:485): pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:23.166000 audit[6095]: CRED_DISP pid=6095 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:23.181045 systemd[1]: sshd@23-172.31.30.52:22-139.178.89.65:34548.service: Deactivated successfully. Jun 25 16:22:23.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.52:22-139.178.89.65:34548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.182456 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:22:23.183802 systemd-logind[1885]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:22:23.185022 systemd-logind[1885]: Removed session 24. Jun 25 16:22:28.172747 systemd[1]: run-containerd-runc-k8s.io-eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d-runc.FY6hGr.mount: Deactivated successfully. 
Jun 25 16:22:28.201446 systemd[1]: Started sshd@24-172.31.30.52:22-139.178.89.65:36252.service - OpenSSH per-connection server daemon (139.178.89.65:36252). Jun 25 16:22:28.206965 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:22:28.207154 kernel: audit: type=1130 audit(1719332548.201:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.52:22-139.178.89.65:36252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:28.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.52:22-139.178.89.65:36252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:28.406000 audit[6131]: USER_ACCT pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.409305 sshd[6131]: Accepted publickey for core from 139.178.89.65 port 36252 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:28.409000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.413593 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:28.417085 kernel: audit: type=1101 audit(1719332548.406:488): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.417229 kernel: audit: type=1103 audit(1719332548.409:489): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.417283 kernel: audit: type=1006 audit(1719332548.409:490): pid=6131 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:22:28.409000 audit[6131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe68f107d0 a2=3 a3=7f6e27d3e480 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:28.427439 kernel: audit: type=1300 audit(1719332548.409:490): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe68f107d0 a2=3 a3=7f6e27d3e480 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:28.427536 kernel: audit: type=1327 audit(1719332548.409:490): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:28.409000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:28.431035 systemd-logind[1885]: New session 25 of user core. Jun 25 16:22:28.432503 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:22:28.438000 audit[6131]: USER_START pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.447609 kernel: audit: type=1105 audit(1719332548.438:491): pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.447724 kernel: audit: type=1103 audit(1719332548.441:492): pid=6138 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.441000 audit[6138]: CRED_ACQ pid=6138 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.755018 sshd[6131]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:28.762000 audit[6131]: USER_END pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.777196 kernel: audit: type=1106 audit(1719332548.762:493): pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.777378 kernel: audit: type=1104 audit(1719332548.763:494): pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.763000 audit[6131]: CRED_DISP pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:28.771701 systemd[1]: sshd@24-172.31.30.52:22-139.178.89.65:36252.service: Deactivated successfully. Jun 25 16:22:28.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.52:22-139.178.89.65:36252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:28.785947 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:22:28.787702 systemd-logind[1885]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:22:28.792831 systemd-logind[1885]: Removed session 25. Jun 25 16:22:33.788213 systemd[1]: Started sshd@25-172.31.30.52:22-139.178.89.65:36264.service - OpenSSH per-connection server daemon (139.178.89.65:36264). 
Jun 25 16:22:33.792797 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:22:33.792879 kernel: audit: type=1130 audit(1719332553.787:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.52:22-139.178.89.65:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:33.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.52:22-139.178.89.65:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:33.939000 audit[6154]: USER_ACCT pid=6154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.939000 audit[6154]: CRED_ACQ pid=6154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.944163 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:33.945757 sshd[6154]: Accepted publickey for core from 139.178.89.65 port 36264 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:33.948792 kernel: audit: type=1101 audit(1719332553.939:497): pid=6154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.949027 kernel: audit: type=1103 audit(1719332553.939:498): pid=6154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.949082 kernel: audit: type=1006 audit(1719332553.939:499): pid=6154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:22:33.939000 audit[6154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc8e26c50 a2=3 a3=7f29a61b2480 items=0 ppid=1 pid=6154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.939000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:33.954128 kernel: audit: type=1300 audit(1719332553.939:499): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc8e26c50 a2=3 a3=7f29a61b2480 items=0 ppid=1 pid=6154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.954264 kernel: audit: type=1327 audit(1719332553.939:499): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:33.959660 systemd-logind[1885]: New session 26 of user core. Jun 25 16:22:33.965522 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 16:22:33.978498 kernel: audit: type=1105 audit(1719332553.971:500): pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.971000 audit[6154]: USER_START pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.978000 audit[6157]: CRED_ACQ pid=6157 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:33.982089 kernel: audit: type=1103 audit(1719332553.978:501): pid=6157 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:34.220221 sshd[6154]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:34.220000 audit[6154]: USER_END pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:34.224571 systemd[1]: sshd@25-172.31.30.52:22-139.178.89.65:36264.service: Deactivated successfully. Jun 25 16:22:34.229669 kernel: audit: type=1106 audit(1719332554.220:502): pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:34.229758 kernel: audit: type=1104 audit(1719332554.220:503): pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:34.220000 audit[6154]: CRED_DISP pid=6154 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:34.226295 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:22:34.228865 systemd-logind[1885]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:22:34.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.52:22-139.178.89.65:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:34.231144 systemd-logind[1885]: Removed session 26. Jun 25 16:22:39.250090 systemd[1]: Started sshd@26-172.31.30.52:22-139.178.89.65:43316.service - OpenSSH per-connection server daemon (139.178.89.65:43316). 
Jun 25 16:22:39.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.52:22-139.178.89.65:43316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:39.254719 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:22:39.254922 kernel: audit: type=1130 audit(1719332559.249:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.52:22-139.178.89.65:43316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:39.447000 audit[6196]: USER_ACCT pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.449423 sshd[6196]: Accepted publickey for core from 139.178.89.65 port 43316 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:39.448000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.455579 kernel: audit: type=1101 audit(1719332559.447:506): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.455825 kernel: audit: type=1103 audit(1719332559.448:507): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.455887 kernel: audit: type=1006 audit(1719332559.448:508): pid=6196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:22:39.457171 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:39.457707 kernel: audit: type=1300 audit(1719332559.448:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc994b1900 a2=3 a3=7f62db44b480 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:39.448000 audit[6196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc994b1900 a2=3 a3=7f62db44b480 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:39.460122 kernel: audit: type=1327 audit(1719332559.448:508): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:39.448000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:39.467440 systemd-logind[1885]: New session 27 of user core. Jun 25 16:22:39.472519 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 16:22:39.486000 audit[6196]: USER_START pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.493346 kernel: audit: type=1105 audit(1719332559.486:509): pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.494000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.499159 kernel: audit: type=1103 audit(1719332559.494:510): pid=6202 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.764260 sshd[6196]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:39.764000 audit[6196]: USER_END pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.770662 kernel: audit: type=1106 audit(1719332559.764:511): pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.770000 audit[6196]: CRED_DISP pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.775750 systemd-logind[1885]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:22:39.776079 kernel: audit: type=1104 audit(1719332559.770:512): pid=6196 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:39.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.52:22-139.178.89.65:43316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:39.776964 systemd[1]: sshd@26-172.31.30.52:22-139.178.89.65:43316.service: Deactivated successfully. Jun 25 16:22:39.778207 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:22:39.780253 systemd-logind[1885]: Removed session 27. 
Jun 25 16:22:44.799457 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:22:44.799601 kernel: audit: type=1130 audit(1719332564.794:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.52:22-139.178.89.65:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:44.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.52:22-139.178.89.65:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:44.795770 systemd[1]: Started sshd@27-172.31.30.52:22-139.178.89.65:43322.service - OpenSSH per-connection server daemon (139.178.89.65:43322). Jun 25 16:22:44.963000 audit[6229]: USER_ACCT pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:44.965041 sshd[6229]: Accepted publickey for core from 139.178.89.65 port 43322 ssh2: RSA SHA256:YLA6YdAAMbsq13yWE4JtkMTieUXkKVpRlVMQduUk54Q Jun 25 16:22:44.968088 kernel: audit: type=1101 audit(1719332564.963:515): pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:44.967000 audit[6229]: CRED_ACQ pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:44.969203 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:44.973144 kernel: audit: type=1103 audit(1719332564.967:516): pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:44.973225 kernel: audit: type=1006 audit(1719332564.967:517): pid=6229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:22:44.973283 kernel: audit: type=1300 audit(1719332564.967:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1d0f7d80 a2=3 a3=7f9c16b66480 items=0 ppid=1 pid=6229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:44.967000 audit[6229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1d0f7d80 a2=3 a3=7f9c16b66480 items=0 ppid=1 pid=6229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:44.976889 kernel: audit: type=1327 audit(1719332564.967:517): proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:44.967000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:44.979187 systemd-logind[1885]: New session 28 of user core. Jun 25 16:22:44.986501 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 16:22:44.994000 audit[6229]: USER_START pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:44.999000 audit[6232]: CRED_ACQ pid=6232 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.005130 kernel: audit: type=1105 audit(1719332564.994:518): pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.005281 kernel: audit: type=1103 audit(1719332564.999:519): pid=6232 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.305989 sshd[6229]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:45.314000 audit[6229]: USER_END pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.325432 kernel: audit: type=1106 audit(1719332565.314:520): pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.325599 kernel: audit: type=1104 audit(1719332565.314:521): pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.314000 audit[6229]: CRED_DISP pid=6229 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:22:45.337330 systemd[1]: sshd@27-172.31.30.52:22-139.178.89.65:43322.service: Deactivated successfully. Jun 25 16:22:45.340253 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:22:45.341825 systemd-logind[1885]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:22:45.345756 systemd-logind[1885]: Removed session 28. Jun 25 16:22:45.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.52:22-139.178.89.65:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:58.204886 systemd[1]: run-containerd-runc-k8s.io-eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d-runc.OWILjM.mount: Deactivated successfully. 
Jun 25 16:22:58.982989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d-rootfs.mount: Deactivated successfully. Jun 25 16:22:58.992095 containerd[1894]: time="2024-06-25T16:22:58.987487679Z" level=info msg="shim disconnected" id=7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d namespace=k8s.io Jun 25 16:22:58.992641 containerd[1894]: time="2024-06-25T16:22:58.992093552Z" level=warning msg="cleaning up after shim disconnected" id=7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d namespace=k8s.io Jun 25 16:22:58.992641 containerd[1894]: time="2024-06-25T16:22:58.992115076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:22:59.491711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62-rootfs.mount: Deactivated successfully. Jun 25 16:22:59.494407 containerd[1894]: time="2024-06-25T16:22:59.494345244Z" level=info msg="shim disconnected" id=e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62 namespace=k8s.io Jun 25 16:22:59.494407 containerd[1894]: time="2024-06-25T16:22:59.494404622Z" level=warning msg="cleaning up after shim disconnected" id=e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62 namespace=k8s.io Jun 25 16:22:59.494742 containerd[1894]: time="2024-06-25T16:22:59.494416652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:22:59.638655 kubelet[3353]: I0625 16:22:59.638606 3353 scope.go:117] "RemoveContainer" containerID="7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d" Jun 25 16:22:59.639312 kubelet[3353]: I0625 16:22:59.639216 3353 scope.go:117] "RemoveContainer" containerID="e2364bb4fdd4cee327da357a61d0f69c0d5a63c11b5fc8bc74ac7466aab61a62" Jun 25 16:22:59.647105 containerd[1894]: time="2024-06-25T16:22:59.647042460Z" level=info msg="CreateContainer within sandbox \"64627de833179c540adf0bc9b77acd715669f3d8b352ee7dd6a2d38d3221d8f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 16:22:59.648788 containerd[1894]: time="2024-06-25T16:22:59.648732840Z" level=info msg="CreateContainer within sandbox \"75620bea4148b1d6e9f3dd7d0209be633feb7b422974ff0ab092739ad23952f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 16:22:59.686940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743715244.mount: Deactivated successfully. Jun 25 16:22:59.696800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903454678.mount: Deactivated successfully. 
Jun 25 16:22:59.711457 containerd[1894]: time="2024-06-25T16:22:59.711383362Z" level=info msg="CreateContainer within sandbox \"64627de833179c540adf0bc9b77acd715669f3d8b352ee7dd6a2d38d3221d8f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7de18588f864594760a40d4c489802b756e5db5bbac2c0aa4c0301902eb9ae4a\"" Jun 25 16:22:59.712424 containerd[1894]: time="2024-06-25T16:22:59.712372969Z" level=info msg="StartContainer for \"7de18588f864594760a40d4c489802b756e5db5bbac2c0aa4c0301902eb9ae4a\"" Jun 25 16:22:59.724911 containerd[1894]: time="2024-06-25T16:22:59.724865975Z" level=info msg="CreateContainer within sandbox \"75620bea4148b1d6e9f3dd7d0209be633feb7b422974ff0ab092739ad23952f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a315df8d591bba285050d39678c171fa1c88d1eb2fe5120a73fdbdb3af9022ba\"" Jun 25 16:22:59.726388 containerd[1894]: time="2024-06-25T16:22:59.726283205Z" level=info msg="StartContainer for \"a315df8d591bba285050d39678c171fa1c88d1eb2fe5120a73fdbdb3af9022ba\"" Jun 25 16:22:59.874964 containerd[1894]: time="2024-06-25T16:22:59.874851054Z" level=info msg="StartContainer for \"7de18588f864594760a40d4c489802b756e5db5bbac2c0aa4c0301902eb9ae4a\" returns successfully" Jun 25 16:22:59.901938 containerd[1894]: time="2024-06-25T16:22:59.901864086Z" level=info msg="StartContainer for \"a315df8d591bba285050d39678c171fa1c88d1eb2fe5120a73fdbdb3af9022ba\" returns successfully" Jun 25 16:23:00.177571 kubelet[3353]: E0625 16:23:00.177530 3353 controller.go:193] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-30-52)" Jun 25 16:23:04.761962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64-rootfs.mount: Deactivated successfully. Jun 25 16:23:04.765661 containerd[1894]: time="2024-06-25T16:23:04.765577585Z" level=info msg="shim disconnected" id=a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64 namespace=k8s.io Jun 25 16:23:04.766540 containerd[1894]: time="2024-06-25T16:23:04.765659090Z" level=warning msg="cleaning up after shim disconnected" id=a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64 namespace=k8s.io Jun 25 16:23:04.766540 containerd[1894]: time="2024-06-25T16:23:04.765676082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:05.667688 kubelet[3353]: I0625 16:23:05.667593 3353 scope.go:117] "RemoveContainer" containerID="a64db02d87d3eb694df6dadd1d54d1c04b4b34dc3ae3c1e9203a43017f602e64" Jun 25 16:23:05.686685 containerd[1894]: time="2024-06-25T16:23:05.686593382Z" level=info msg="CreateContainer within sandbox \"96d04bb501b6730b506bec3cdeb0c0329a7b0d427e011c695f9b54aee9be6e08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 16:23:05.732530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556585207.mount: Deactivated successfully. 
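The containerd records above ("shim disconnected", "cleaning up after shim disconnected", then kubelet RemoveContainer followed by CreateContainer with Attempt:1 in the same sandbox) are key=value lines in a logfmt-like layout. A small parsing sketch, assuming only the time/level/msg/id/namespace layout shown above; the regex and field handling are illustrative, not part of containerd:

import re

# One of the containerd lines above, reproduced verbatim (minus the journal prefix).
line = ('time="2024-06-25T16:22:58.987487679Z" level=info msg="shim disconnected" '
        'id=7cfd38343a253e5593d742c07b1d23fa93af9ef669bcf492d088d06d9404dd6d namespace=k8s.io')

# Each field is key=value, where the value is either double-quoted or a bare token.
fields = {key: quoted if quoted else bare
          for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line)}
print(fields["level"], fields["msg"], fields["id"][:12])
# -> info shim disconnected 7cfd38343a25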
Jun 25 16:23:05.738252 containerd[1894]: time="2024-06-25T16:23:05.738119783Z" level=info msg="CreateContainer within sandbox \"96d04bb501b6730b506bec3cdeb0c0329a7b0d427e011c695f9b54aee9be6e08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f5dac2b3691a2128385b7d19f7ed882f7e61de048f1f07581de45c2bb8914d24\"" Jun 25 16:23:05.739097 containerd[1894]: time="2024-06-25T16:23:05.738782484Z" level=info msg="StartContainer for \"f5dac2b3691a2128385b7d19f7ed882f7e61de048f1f07581de45c2bb8914d24\"" Jun 25 16:23:05.804935 systemd[1]: run-containerd-runc-k8s.io-f5dac2b3691a2128385b7d19f7ed882f7e61de048f1f07581de45c2bb8914d24-runc.uSUlaj.mount: Deactivated successfully. Jun 25 16:23:05.883846 systemd[1]: run-containerd-runc-k8s.io-eb052608b087ba9dba8d4be30c6bbf6cd47de12379e9581cd7cd4cef43f20d3d-runc.QkdYiz.mount: Deactivated successfully. Jun 25 16:23:05.960172 containerd[1894]: time="2024-06-25T16:23:05.958451842Z" level=info msg="StartContainer for \"f5dac2b3691a2128385b7d19f7ed882f7e61de048f1f07581de45c2bb8914d24\" returns successfully" Jun 25 16:23:07.295377 systemd[1]: run-containerd-runc-k8s.io-fbfff568badc418d60e33f210d56b9cd9cc2b1b00493da21232f3dfd4671cc83-runc.qapwm3.mount: Deactivated successfully. Jun 25 16:23:10.240129 kubelet[3353]: E0625 16:23:10.240078 3353 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 16:23:20.267114 kubelet[3353]: E0625 16:23:20.266657 3353 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"