Jun 25 16:26:01.027601 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:26:01.027621 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:01.027633 kernel: BIOS-provided physical RAM map: Jun 25 16:26:01.027640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:26:01.027646 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:26:01.027653 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:26:01.027660 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jun 25 16:26:01.027685 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jun 25 16:26:01.027692 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 16:26:01.027700 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:26:01.027706 kernel: NX (Execute Disable) protection: active Jun 25 16:26:01.027712 kernel: SMBIOS 2.8 present. Jun 25 16:26:01.027718 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Jun 25 16:26:01.027725 kernel: Hypervisor detected: KVM Jun 25 16:26:01.027733 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:26:01.027741 kernel: kvm-clock: using sched offset of 5691722793 cycles Jun 25 16:26:01.027748 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:26:01.027756 kernel: tsc: Detected 1996.249 MHz processor Jun 25 16:26:01.027763 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:26:01.027770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:26:01.027777 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jun 25 16:26:01.027784 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:26:01.027791 kernel: ACPI: Early table checksum verification disabled Jun 25 16:26:01.027797 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Jun 25 16:26:01.027806 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:01.027813 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:01.027820 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:01.027827 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 25 16:26:01.027834 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:01.027841 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:01.027848 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Jun 25 16:26:01.027855 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Jun 25 16:26:01.027863 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 25 16:26:01.027870 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Jun 25 16:26:01.027877 kernel: ACPI: Reserving WAET 
table memory at [mem 0x7ffe1820-0x7ffe1847] Jun 25 16:26:01.027884 kernel: No NUMA configuration found Jun 25 16:26:01.027890 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Jun 25 16:26:01.027897 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Jun 25 16:26:01.027904 kernel: Zone ranges: Jun 25 16:26:01.027911 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:26:01.027935 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Jun 25 16:26:01.027945 kernel: Normal empty Jun 25 16:26:01.027952 kernel: Movable zone start for each node Jun 25 16:26:01.027960 kernel: Early memory node ranges Jun 25 16:26:01.027967 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:26:01.027974 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jun 25 16:26:01.027981 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Jun 25 16:26:01.027990 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:26:01.027997 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:26:01.028004 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Jun 25 16:26:01.028011 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 16:26:01.028019 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:26:01.028026 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:26:01.028033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:26:01.028040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:26:01.028047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:26:01.028056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:26:01.028063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:26:01.028070 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:26:01.028077 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:26:01.028085 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 25 16:26:01.028092 kernel: Booting paravirtualized kernel on KVM Jun 25 16:26:01.028099 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:26:01.028106 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:26:01.028114 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:26:01.028123 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:26:01.028130 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:26:01.028137 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 25 16:26:01.028144 kernel: Fallback order for Node 0: 0 Jun 25 16:26:01.028151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Jun 25 16:26:01.028159 kernel: Policy zone: DMA32 Jun 25 16:26:01.028167 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:01.028175 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 16:26:01.028184 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:26:01.028191 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:26:01.028198 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:26:01.028206 kernel: Memory: 1967132K/2096620K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129228K reserved, 0K cma-reserved) Jun 25 16:26:01.028214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:26:01.028221 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:26:01.028228 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:26:01.028235 kernel: Dynamic Preempt: voluntary Jun 25 16:26:01.028242 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:26:01.028251 kernel: rcu: RCU event tracing is enabled. Jun 25 16:26:01.028259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:26:01.028266 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:26:01.028274 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:26:01.028281 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:26:01.028289 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:26:01.028296 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:26:01.028303 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:26:01.028310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:26:01.028320 kernel: Console: colour VGA+ 80x25 Jun 25 16:26:01.028327 kernel: printk: console [tty0] enabled Jun 25 16:26:01.028334 kernel: printk: console [ttyS0] enabled Jun 25 16:26:01.028341 kernel: ACPI: Core revision 20220331 Jun 25 16:26:01.028349 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:26:01.028356 kernel: x2apic enabled Jun 25 16:26:01.028363 kernel: Switched APIC routing to physical x2apic. Jun 25 16:26:01.028370 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:26:01.028378 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 16:26:01.028385 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jun 25 16:26:01.028395 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 16:26:01.028402 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 16:26:01.028409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:26:01.028417 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:26:01.028424 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:26:01.028431 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:26:01.028439 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:26:01.028446 kernel: x86/fpu: x87 FPU will use FXSAVE Jun 25 16:26:01.028453 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:26:01.028462 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:26:01.028469 kernel: LSM: Security Framework initializing Jun 25 16:26:01.028476 kernel: SELinux: Initializing. 
Jun 25 16:26:01.028483 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:26:01.028491 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:26:01.028498 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jun 25 16:26:01.028506 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:01.028513 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:01.028530 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:01.028537 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:01.028545 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:01.028552 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:01.028561 kernel: Performance Events: AMD PMU driver. Jun 25 16:26:01.028569 kernel: ... version: 0 Jun 25 16:26:01.028576 kernel: ... bit width: 48 Jun 25 16:26:01.028584 kernel: ... generic registers: 4 Jun 25 16:26:01.028592 kernel: ... value mask: 0000ffffffffffff Jun 25 16:26:01.028601 kernel: ... max period: 00007fffffffffff Jun 25 16:26:01.028608 kernel: ... fixed-purpose events: 0 Jun 25 16:26:01.028616 kernel: ... event mask: 000000000000000f Jun 25 16:26:01.028623 kernel: signal: max sigframe size: 1440 Jun 25 16:26:01.028631 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:26:01.028639 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:26:01.028647 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:26:01.028655 kernel: x86: Booting SMP configuration: Jun 25 16:26:01.028662 kernel: .... node #0, CPUs: #1 Jun 25 16:26:01.028672 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:26:01.028680 kernel: smpboot: Max logical packages: 2 Jun 25 16:26:01.028687 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jun 25 16:26:01.028695 kernel: devtmpfs: initialized Jun 25 16:26:01.028703 kernel: x86/mm: Memory block size: 128MB Jun 25 16:26:01.028711 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:26:01.028719 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:26:01.028726 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:26:01.028734 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:26:01.028742 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:26:01.028752 kernel: audit: type=2000 audit(1719332760.193:1): state=initialized audit_enabled=0 res=1 Jun 25 16:26:01.028760 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:26:01.028767 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:26:01.028775 kernel: cpuidle: using governor menu Jun 25 16:26:01.028783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:26:01.028790 kernel: dca service started, version 1.12.1 Jun 25 16:26:01.028798 kernel: PCI: Using configuration type 1 for base access Jun 25 16:26:01.028806 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 16:26:01.028815 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:26:01.028823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:26:01.028831 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:26:01.028838 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:26:01.028846 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:26:01.028853 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:26:01.028861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:26:01.028868 kernel: ACPI: Interpreter enabled Jun 25 16:26:01.028876 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 16:26:01.028884 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:26:01.028893 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:26:01.028901 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:26:01.028909 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:26:01.028916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:26:01.029060 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:26:01.029149 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:26:01.029230 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Jun 25 16:26:01.029246 kernel: acpiphp: Slot [3] registered Jun 25 16:26:01.029254 kernel: acpiphp: Slot [4] registered Jun 25 16:26:01.029262 kernel: acpiphp: Slot [5] registered Jun 25 16:26:01.029269 kernel: acpiphp: Slot [6] registered Jun 25 16:26:01.029277 kernel: acpiphp: Slot [7] registered Jun 25 16:26:01.029284 kernel: acpiphp: Slot [8] registered Jun 25 16:26:01.029292 kernel: acpiphp: Slot [9] registered Jun 25 16:26:01.029299 kernel: acpiphp: Slot [10] registered Jun 25 16:26:01.029307 kernel: acpiphp: Slot [11] registered Jun 25 16:26:01.029316 kernel: acpiphp: Slot [12] registered Jun 25 16:26:01.029324 kernel: acpiphp: Slot [13] registered Jun 25 16:26:01.029332 kernel: acpiphp: Slot [14] registered Jun 25 16:26:01.029339 kernel: acpiphp: Slot [15] registered Jun 25 16:26:01.029347 kernel: acpiphp: Slot [16] registered Jun 25 16:26:01.029354 kernel: acpiphp: Slot [17] registered Jun 25 16:26:01.029362 kernel: acpiphp: Slot [18] registered Jun 25 16:26:01.029369 kernel: acpiphp: Slot [19] registered Jun 25 16:26:01.029377 kernel: acpiphp: Slot [20] registered Jun 25 16:26:01.029386 kernel: acpiphp: Slot [21] registered Jun 25 16:26:01.029394 kernel: acpiphp: Slot [22] registered Jun 25 16:26:01.029401 kernel: acpiphp: Slot [23] registered Jun 25 16:26:01.029408 kernel: acpiphp: Slot [24] registered Jun 25 16:26:01.029416 kernel: acpiphp: Slot [25] registered Jun 25 16:26:01.029424 kernel: acpiphp: Slot [26] registered Jun 25 16:26:01.029431 kernel: acpiphp: Slot [27] registered Jun 25 16:26:01.029439 kernel: acpiphp: Slot [28] registered Jun 25 16:26:01.029447 kernel: acpiphp: Slot [29] registered Jun 25 16:26:01.029454 kernel: acpiphp: Slot [30] registered Jun 25 16:26:01.029463 kernel: acpiphp: Slot [31] registered Jun 25 16:26:01.029471 kernel: PCI host bridge to bus 0000:00 Jun 25 16:26:01.029561 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:26:01.029636 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:26:01.029710 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Jun 25 16:26:01.029784 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:26:01.029856 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 16:26:01.029950 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:26:01.030061 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:26:01.030178 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:26:01.030273 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:26:01.030360 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jun 25 16:26:01.030443 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:26:01.030530 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:26:01.030610 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:26:01.030694 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:26:01.030787 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:26:01.030872 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 16:26:01.030984 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 16:26:01.031081 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 25 16:26:01.031170 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 25 16:26:01.031254 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 25 16:26:01.031337 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jun 25 16:26:01.031421 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jun 25 16:26:01.031504 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:26:01.031598 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:26:01.031698 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jun 25 16:26:01.031787 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jun 25 16:26:01.031870 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 25 16:26:01.031976 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jun 25 16:26:01.032071 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:26:01.032158 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:26:01.032241 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jun 25 16:26:01.032325 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 25 16:26:01.032425 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jun 25 16:26:01.032511 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jun 25 16:26:01.032594 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 25 16:26:01.032688 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 16:26:01.032772 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jun 25 16:26:01.032856 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 25 16:26:01.032868 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:26:01.032879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:26:01.032887 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:26:01.032895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:26:01.032902 kernel: ACPI: PCI: Interrupt link 
LNKS configured for IRQ 9 Jun 25 16:26:01.032910 kernel: iommu: Default domain type: Translated Jun 25 16:26:01.032932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:26:01.032941 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:26:01.034961 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:26:01.034973 kernel: PTP clock support registered Jun 25 16:26:01.034984 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:26:01.034992 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:26:01.035000 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:26:01.035008 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jun 25 16:26:01.035103 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:26:01.035188 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:26:01.035270 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:26:01.035282 kernel: vgaarb: loaded Jun 25 16:26:01.035293 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:26:01.035301 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:26:01.035308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:26:01.035316 kernel: pnp: PnP ACPI init Jun 25 16:26:01.035429 kernel: pnp 00:03: [dma 2] Jun 25 16:26:01.035442 kernel: pnp: PnP ACPI: found 5 devices Jun 25 16:26:01.035450 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:26:01.035458 kernel: NET: Registered PF_INET protocol family Jun 25 16:26:01.035466 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:26:01.035477 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:26:01.035485 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:26:01.035493 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:26:01.035501 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:26:01.035508 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:26:01.035516 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:26:01.035524 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:26:01.035532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:26:01.035539 kernel: NET: Registered PF_XDP protocol family Jun 25 16:26:01.035615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:26:01.035700 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:26:01.035773 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:26:01.035844 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:26:01.035914 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 16:26:01.040030 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:26:01.040116 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:26:01.040133 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:26:01.040141 kernel: Initialise system trusted keyrings Jun 25 16:26:01.040149 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:26:01.040157 kernel: Key type asymmetric registered Jun 25 16:26:01.040165 kernel: Asymmetric key parser 'x509' registered Jun 25 
16:26:01.040173 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:26:01.040181 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:26:01.040189 kernel: io scheduler mq-deadline registered Jun 25 16:26:01.040197 kernel: io scheduler kyber registered Jun 25 16:26:01.040207 kernel: io scheduler bfq registered Jun 25 16:26:01.040214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:26:01.040222 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 25 16:26:01.040230 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:26:01.040238 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:26:01.040246 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:26:01.040254 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:26:01.040261 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:26:01.040269 kernel: random: crng init done Jun 25 16:26:01.040277 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:26:01.040287 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:26:01.040294 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:26:01.040377 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 25 16:26:01.040390 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:26:01.040462 kernel: rtc_cmos 00:04: registered as rtc0 Jun 25 16:26:01.040535 kernel: rtc_cmos 00:04: setting system clock to 2024-06-25T16:26:00 UTC (1719332760) Jun 25 16:26:01.040608 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 25 16:26:01.040622 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:26:01.040630 kernel: Segment Routing with IPv6 Jun 25 16:26:01.040638 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:26:01.040645 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:26:01.040653 kernel: Key type dns_resolver registered Jun 25 16:26:01.040661 kernel: IPI shorthand broadcast: enabled Jun 25 16:26:01.040669 kernel: sched_clock: Marking stable (965377219, 125752516)->(1094834151, -3704416) Jun 25 16:26:01.040676 kernel: registered taskstats version 1 Jun 25 16:26:01.040684 kernel: Loading compiled-in X.509 certificates Jun 25 16:26:01.040692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:26:01.040702 kernel: Key type .fscrypt registered Jun 25 16:26:01.040709 kernel: Key type fscrypt-provisioning registered Jun 25 16:26:01.040717 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:26:01.040725 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:26:01.040732 kernel: ima: No architecture policies found Jun 25 16:26:01.040740 kernel: clk: Disabling unused clocks Jun 25 16:26:01.040747 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:26:01.040755 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:26:01.040765 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:26:01.040772 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:26:01.040780 kernel: Run /init as init process Jun 25 16:26:01.040787 kernel: with arguments: Jun 25 16:26:01.040795 kernel: /init Jun 25 16:26:01.040802 kernel: with environment: Jun 25 16:26:01.040810 kernel: HOME=/ Jun 25 16:26:01.040818 kernel: TERM=linux Jun 25 16:26:01.040825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:26:01.040835 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:26:01.040847 systemd[1]: Detected virtualization kvm. Jun 25 16:26:01.040856 systemd[1]: Detected architecture x86-64. Jun 25 16:26:01.040864 systemd[1]: Running in initrd. Jun 25 16:26:01.040873 systemd[1]: No hostname configured, using default hostname. Jun 25 16:26:01.040881 systemd[1]: Hostname set to . Jun 25 16:26:01.040890 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:26:01.040900 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:26:01.040908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:01.040917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:01.040949 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:26:01.040958 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:26:01.040966 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:26:01.040974 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:26:01.040986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:26:01.040994 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:26:01.041003 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:26:01.041021 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:26:01.041032 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:26:01.041040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:01.041051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:01.041059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:01.041068 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:26:01.041077 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:26:01.041086 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:26:01.041094 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:26:01.041103 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:26:01.041112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:26:01.041120 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:26:01.041131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:01.041140 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:26:01.041149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:26:01.041162 systemd-journald[180]: Journal started Jun 25 16:26:01.041205 systemd-journald[180]: Runtime Journal (/run/log/journal/de8de52328094ffd9f6a5ac839821496) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:26:01.009213 systemd-modules-load[181]: Inserted module 'overlay' Jun 25 16:26:01.086536 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:26:01.086558 kernel: Bridge firewalling registered Jun 25 16:26:01.086570 kernel: SCSI subsystem initialized Jun 25 16:26:01.086580 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:26:01.086596 kernel: audit: type=1130 audit(1719332761.082:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.053074 systemd-modules-load[181]: Inserted module 'br_netfilter' Jun 25 16:26:01.088215 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:26:01.093509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:26:01.093530 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:26:01.093540 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:26:01.094200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:26:01.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.098791 systemd-modules-load[181]: Inserted module 'dm_multipath' Jun 25 16:26:01.099392 kernel: audit: type=1130 audit(1719332761.093:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.099768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:01.104652 kernel: audit: type=1130 audit(1719332761.098:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:01.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.107956 kernel: audit: type=1130 audit(1719332761.103:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.109185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:26:01.110584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:26:01.116063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:26:01.126212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:01.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.127761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:26:01.131322 kernel: audit: type=1130 audit(1719332761.126:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.135955 kernel: audit: type=1130 audit(1719332761.130:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.131000 audit: BPF prog-id=6 op=LOAD Jun 25 16:26:01.137650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:26:01.138357 kernel: audit: type=1334 audit(1719332761.131:8): prog-id=6 op=LOAD Jun 25 16:26:01.145612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:26:01.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.150975 kernel: audit: type=1130 audit(1719332761.146:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.155058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:26:01.168386 dracut-cmdline[207]: dracut-dracut-053 Jun 25 16:26:01.170499 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:01.181656 systemd-resolved[204]: Positive Trust Anchors: Jun 25 16:26:01.182445 systemd-resolved[204]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:26:01.183264 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:26:01.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.192548 systemd-resolved[204]: Defaulting to hostname 'linux'. Jun 25 16:26:01.198301 kernel: audit: type=1130 audit(1719332761.193:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.193515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:26:01.194103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:01.235961 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:26:01.250958 kernel: iscsi: registered transport (tcp) Jun 25 16:26:01.278304 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:26:01.278378 kernel: QLogic iSCSI HBA Driver Jun 25 16:26:01.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.338675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:26:01.346041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:26:01.435053 kernel: raid6: sse2x4 gen() 12445 MB/s Jun 25 16:26:01.452004 kernel: raid6: sse2x2 gen() 13571 MB/s Jun 25 16:26:01.469198 kernel: raid6: sse2x1 gen() 10488 MB/s Jun 25 16:26:01.469261 kernel: raid6: using algorithm sse2x2 gen() 13571 MB/s Jun 25 16:26:01.487206 kernel: raid6: .... xor() 8471 MB/s, rmw enabled Jun 25 16:26:01.487289 kernel: raid6: using ssse3x2 recovery algorithm Jun 25 16:26:01.492007 kernel: xor: measuring software checksum speed Jun 25 16:26:01.494233 kernel: prefetch64-sse : 17052 MB/sec Jun 25 16:26:01.494289 kernel: generic_sse : 15784 MB/sec Jun 25 16:26:01.495070 kernel: xor: using function: prefetch64-sse (17052 MB/sec) Jun 25 16:26:01.678009 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:26:01.698014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:26:01.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.699000 audit: BPF prog-id=7 op=LOAD Jun 25 16:26:01.699000 audit: BPF prog-id=8 op=LOAD Jun 25 16:26:01.707294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:26:01.732462 systemd-udevd[384]: Using default interface naming scheme 'v252'. Jun 25 16:26:01.737574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:26:01.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.754225 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:26:01.774543 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jun 25 16:26:01.834300 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:26:01.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.843291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:26:01.913368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:01.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.995952 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jun 25 16:26:02.006835 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jun 25 16:26:02.006982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:26:02.006995 kernel: GPT:17805311 != 41943039 Jun 25 16:26:02.007005 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:26:02.007015 kernel: GPT:17805311 != 41943039 Jun 25 16:26:02.007025 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:26:02.007042 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:02.028953 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (439) Jun 25 16:26:02.039954 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (430) Jun 25 16:26:02.044320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:26:02.098499 kernel: libata version 3.00 loaded. Jun 25 16:26:02.098522 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:26:02.098677 kernel: scsi host0: ata_piix Jun 25 16:26:02.098808 kernel: scsi host1: ata_piix Jun 25 16:26:02.098911 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jun 25 16:26:02.098940 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jun 25 16:26:02.101698 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:26:02.105829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:26:02.109295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:26:02.109865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 16:26:02.121123 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:26:02.132287 disk-uuid[459]: Primary Header is updated. Jun 25 16:26:02.132287 disk-uuid[459]: Secondary Entries is updated. Jun 25 16:26:02.132287 disk-uuid[459]: Secondary Header is updated. 
Jun 25 16:26:02.143956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:02.147946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:03.161000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:03.161220 disk-uuid[461]: The operation has completed successfully. Jun 25 16:26:03.242293 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:26:03.243438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:26:03.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.270333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:26:03.273510 sh[473]: Success Jun 25 16:26:03.300009 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jun 25 16:26:03.397154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:26:03.399159 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:26:03.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.404178 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:26:03.432169 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:26:03.432298 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:03.433991 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:26:03.438047 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:26:03.440261 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:26:03.464308 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:26:03.466273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:26:03.469724 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:26:03.475262 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:26:03.505072 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:03.505182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:03.505214 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:03.531903 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:26:03.538987 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:03.551138 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:26:03.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.556729 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 25 16:26:03.691776 ignition[571]: Ignition 2.15.0 Jun 25 16:26:03.692744 ignition[571]: Stage: fetch-offline Jun 25 16:26:03.693390 ignition[571]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:03.694016 ignition[571]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:03.694898 ignition[571]: parsed url from cmdline: "" Jun 25 16:26:03.694979 ignition[571]: no config URL provided Jun 25 16:26:03.695482 ignition[571]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:26:03.696224 ignition[571]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:26:03.696811 ignition[571]: failed to fetch config: resource requires networking Jun 25 16:26:03.697783 ignition[571]: Ignition finished successfully Jun 25 16:26:03.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.699869 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:26:03.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.705105 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:26:03.705000 audit: BPF prog-id=9 op=LOAD Jun 25 16:26:03.714388 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:26:03.736390 systemd-networkd[658]: lo: Link UP Jun 25 16:26:03.736405 systemd-networkd[658]: lo: Gained carrier Jun 25 16:26:03.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.736903 systemd-networkd[658]: Enumeration completed Jun 25 16:26:03.737196 systemd-networkd[658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:03.737200 systemd-networkd[658]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:26:03.737300 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:26:03.738805 systemd-networkd[658]: eth0: Link UP Jun 25 16:26:03.738809 systemd-networkd[658]: eth0: Gained carrier Jun 25 16:26:03.738816 systemd-networkd[658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:03.739031 systemd[1]: Reached target network.target - Network. Jun 25 16:26:03.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.745252 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:26:03.747580 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:26:03.754770 systemd-networkd[658]: eth0: DHCPv4 address 172.24.4.182/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 16:26:03.757063 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:26:03.766665 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jun 25 16:26:03.769258 iscsid[669]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:26:03.769258 iscsid[669]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:26:03.769258 iscsid[669]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:26:03.769258 iscsid[669]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:26:03.769258 iscsid[669]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:26:03.769258 iscsid[669]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:26:03.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.771278 ignition[660]: Ignition 2.15.0 Jun 25 16:26:03.772233 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:26:03.771285 ignition[660]: Stage: fetch Jun 25 16:26:03.774362 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:26:03.771414 ignition[660]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:03.771425 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:03.771515 ignition[660]: parsed url from cmdline: "" Jun 25 16:26:03.771519 ignition[660]: no config URL provided Jun 25 16:26:03.771525 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:26:03.771534 ignition[660]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:26:03.771636 ignition[660]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 25 16:26:03.772349 ignition[660]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 25 16:26:03.772380 ignition[660]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 25 16:26:03.789025 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:26:03.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:03.789661 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:26:03.790693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:03.791904 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:26:03.809121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:26:03.818978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:26:03.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jun 25 16:26:04.094329 ignition[660]: GET result: OK Jun 25 16:26:04.094531 ignition[660]: parsing config with SHA512: 3406f29dc38aff6e67961bc48c45c5350c105ca4b85a4437aae78837b9784f6e7f7da5fbf0ad2a1e166d0c42240d311dae02470195226317e68ced7f2abbab8b Jun 25 16:26:04.107592 unknown[660]: fetched base config from "system" Jun 25 16:26:04.107622 unknown[660]: fetched base config from "system" Jun 25 16:26:04.108696 ignition[660]: fetch: fetch complete Jun 25 16:26:04.107639 unknown[660]: fetched user config from "openstack" Jun 25 16:26:04.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.108709 ignition[660]: fetch: fetch passed Jun 25 16:26:04.112473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:26:04.108812 ignition[660]: Ignition finished successfully Jun 25 16:26:04.124284 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:26:04.156487 ignition[683]: Ignition 2.15.0 Jun 25 16:26:04.156517 ignition[683]: Stage: kargs Jun 25 16:26:04.156782 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:04.156812 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:04.159966 ignition[683]: kargs: kargs passed Jun 25 16:26:04.160083 ignition[683]: Ignition finished successfully Jun 25 16:26:04.164256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:26:04.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.170700 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:26:04.200635 ignition[689]: Ignition 2.15.0 Jun 25 16:26:04.200665 ignition[689]: Stage: disks Jun 25 16:26:04.200870 ignition[689]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:04.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.205221 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:26:04.200892 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:04.206791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:26:04.202879 ignition[689]: disks: disks passed Jun 25 16:26:04.208218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:04.202993 ignition[689]: Ignition finished successfully Jun 25 16:26:04.210176 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:26:04.212051 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:26:04.213993 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:26:04.223740 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:26:04.255855 systemd-fsck[697]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 16:26:04.266887 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:26:04.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:26:04.277187 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:26:04.438008 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:26:04.438522 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:26:04.439189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:26:04.448057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:26:04.450028 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:26:04.451596 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:26:04.452418 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 25 16:26:04.458239 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:26:04.458276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:26:04.462384 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:26:04.474232 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:26:04.486198 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (703) Jun 25 16:26:04.491117 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:04.491226 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:04.493895 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:04.525266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:26:04.585343 initrd-setup-root[730]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:26:04.592658 initrd-setup-root[737]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:26:04.599271 initrd-setup-root[744]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:26:04.605624 initrd-setup-root[751]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:26:04.632710 coreos-metadata[705]: Jun 25 16:26:04.632 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 16:26:04.652304 coreos-metadata[705]: Jun 25 16:26:04.652 INFO Fetch successful Jun 25 16:26:04.653128 coreos-metadata[705]: Jun 25 16:26:04.653 INFO wrote hostname ci-3815-2-4-3-54e11b9a94.novalocal to /sysroot/etc/hostname Jun 25 16:26:04.656007 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 25 16:26:04.656125 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 25 16:26:04.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.723265 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:26:04.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:04.738249 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:26:04.739540 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:26:04.748498 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:26:04.751711 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:04.772849 ignition[818]: INFO : Ignition 2.15.0 Jun 25 16:26:04.772849 ignition[818]: INFO : Stage: mount Jun 25 16:26:04.774191 ignition[818]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:04.774191 ignition[818]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:04.775570 ignition[818]: INFO : mount: mount passed Jun 25 16:26:04.775570 ignition[818]: INFO : Ignition finished successfully Jun 25 16:26:04.784315 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:26:04.784339 kernel: audit: type=1130 audit(1719332764.777:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.777298 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:26:04.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.785262 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:26:04.791363 kernel: audit: type=1130 audit(1719332764.785:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:04.786045 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:26:05.448414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:26:05.461975 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (828) Jun 25 16:26:05.471737 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:05.471800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:05.474993 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:05.488135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
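The entries above show coreos-metadata fetching the instance hostname from the OpenStack metadata endpoint (http://169.254.169.254/latest/meta-data/hostname) and writing it into /sysroot/etc/hostname. A minimal sketch of that fetch-and-write step, for illustration only; the real agent is the coreos-metadata binary invoked by flatcar-openstack-hostname.service and handles retries and attempt counting, which this does not:

# Illustrative sketch of the hostname step logged above: GET the metadata
# endpoint and write the result into the mounted sysroot. URL and paths are
# taken from the log; error handling is deliberately simplified.
import pathlib
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"
SYSROOT_HOSTNAME = pathlib.Path("/sysroot/etc/hostname")

def write_hostname_from_metadata() -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    SYSROOT_HOSTNAME.write_text(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname_from_metadata())
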
Jun 25 16:26:05.526343 ignition[846]: INFO : Ignition 2.15.0 Jun 25 16:26:05.528178 ignition[846]: INFO : Stage: files Jun 25 16:26:05.529625 ignition[846]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:05.531256 ignition[846]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:05.535727 ignition[846]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:26:05.539636 ignition[846]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:26:05.541557 ignition[846]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:26:05.550290 ignition[846]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:26:05.552899 ignition[846]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:26:05.555737 unknown[846]: wrote ssh authorized keys file for user: core Jun 25 16:26:05.557435 ignition[846]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:26:05.562173 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:26:05.564392 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:26:05.566397 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:26:05.566397 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:26:05.638522 systemd-networkd[658]: eth0: Gained IPv6LL Jun 25 16:26:05.654808 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 16:26:05.975893 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:26:05.975893 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:26:05.980875 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:26:06.495241 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 16:26:08.142012 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:26:08.142012 ignition[846]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:26:08.147294 ignition[846]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:26:08.147294 ignition[846]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:26:08.147294 ignition[846]: INFO : files: files passed Jun 25 16:26:08.147294 ignition[846]: INFO : Ignition finished successfully Jun 25 16:26:08.182193 kernel: audit: type=1130 audit(1719332768.147:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.182215 kernel: audit: type=1130 audit(1719332768.168:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:08.182227 kernel: audit: type=1131 audit(1719332768.168:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.146410 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:26:08.160757 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:26:08.165287 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:26:08.167386 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:26:08.167621 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:26:08.188868 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:08.188868 initrd-setup-root-after-ignition[872]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:08.190996 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:08.193649 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:26:08.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.196000 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:26:08.210012 kernel: audit: type=1130 audit(1719332768.194:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.215065 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:26:08.251126 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:26:08.252761 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:26:08.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.256814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:26:08.277647 kernel: audit: type=1130 audit(1719332768.255:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:08.277704 kernel: audit: type=1131 audit(1719332768.255:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.279185 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:26:08.282077 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:26:08.290195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:26:08.321260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:26:08.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.332971 kernel: audit: type=1130 audit(1719332768.321:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.335288 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:26:08.358160 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:08.361496 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:08.363125 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:26:08.366005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:26:08.377818 kernel: audit: type=1131 audit(1719332768.367:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.366297 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:26:08.368824 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:26:08.379216 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:26:08.382182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:26:08.384650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:26:08.387106 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:26:08.389873 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:26:08.392695 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:26:08.395530 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:26:08.398307 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:26:08.401211 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:08.403987 systemd[1]: Stopped target swap.target - Swaps. 
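The files-stage operations logged above (fetching the Helm tarball, writing the Kubernetes sysext image and its /etc/extensions symlink, dropping 10-use-cgroupfs.conf into containerd.service, enabling prepare-helm.service) each correspond to an entry in the Ignition config fetched earlier in the boot. A rough sketch of what such a config fragment might look like, assembled as a Python dict and emitted as JSON; the spec version and exact field names are assumptions and should be checked against the Ignition release actually in use (2.15.0 per the log), not read off this transcript:

# Rough sketch of an Ignition-style config fragment matching the operations
# logged in the files stage. Schema details (spec version, field names) are
# assumptions; URLs and paths are taken from the log.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {"name": "10-use-cgroupfs.conf", "contents": "# drop-in body elided"}
                ],
            },
            {"name": "prepare-helm.service", "enabled": True, "contents": "# unit body elided"},
        ]
    },
}

print(json.dumps(config, indent=2))
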
Jun 25 16:26:08.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.406176 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:26:08.406531 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:26:08.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.409505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:08.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.411737 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:26:08.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.412235 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:26:08.414811 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:26:08.415246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:26:08.417586 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:26:08.418145 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:26:08.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.431186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:26:08.432626 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:26:08.432986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:08.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.438900 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:26:08.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.444533 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:26:08.444917 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:08.446433 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:26:08.449031 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:26:08.457838 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:26:08.457965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:26:08.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:08.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.474475 ignition[890]: INFO : Ignition 2.15.0 Jun 25 16:26:08.474475 ignition[890]: INFO : Stage: umount Jun 25 16:26:08.474475 ignition[890]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:08.474475 ignition[890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 16:26:08.474173 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:26:08.477596 ignition[890]: INFO : umount: umount passed Jun 25 16:26:08.477596 ignition[890]: INFO : Ignition finished successfully Jun 25 16:26:08.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.478284 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:26:08.478391 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:26:08.479171 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:26:08.479250 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:26:08.479820 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:26:08.479863 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:26:08.480552 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:26:08.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.480595 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:26:08.481155 systemd[1]: Stopped target network.target - Network. Jun 25 16:26:08.481663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:26:08.481711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:26:08.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.482302 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:26:08.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:08.482789 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:26:08.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.497000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:26:08.484024 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:08.484822 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:26:08.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.485783 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:26:08.486810 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:26:08.486840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:26:08.487812 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:26:08.487838 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:26:08.488867 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:26:08.488912 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:26:08.490419 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:26:08.491170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:26:08.494447 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:26:08.494576 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:26:08.494961 systemd-networkd[658]: eth0: DHCPv6 lease lost Jun 25 16:26:08.511000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:26:08.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.496521 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:26:08.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.496641 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:26:08.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.497694 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:26:08.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.497785 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:26:08.498564 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:26:08.498593 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:08.499592 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:26:08.499636 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jun 25 16:26:08.506582 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:26:08.511543 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:26:08.511603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:26:08.512981 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:26:08.513027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:08.514025 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:26:08.514063 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:08.514996 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:26:08.515034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:26:08.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.520688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:26:08.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.524513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:26:08.524629 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:26:08.533061 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:26:08.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.533215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:26:08.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.534220 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:26:08.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.534329 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:26:08.535566 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:26:08.535606 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:08.538266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:26:08.538352 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:08.539975 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:26:08.540073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:26:08.541848 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:26:08.541969 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:26:08.543456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jun 25 16:26:08.543541 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:26:08.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.552636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:26:08.553158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:26:08.553215 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:26:08.560468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:26:08.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.560571 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:26:08.561283 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:26:08.566262 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:26:08.574000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:26:08.574000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:26:08.574000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:26:08.575056 systemd[1]: Switching root. Jun 25 16:26:08.580000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:26:08.580000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:26:08.586880 systemd-journald[180]: Journal stopped Jun 25 16:26:09.898022 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jun 25 16:26:09.898088 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:26:09.898118 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:26:09.898139 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:26:09.898158 kernel: SELinux: policy capability open_perms=1 Jun 25 16:26:09.898171 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:26:09.898183 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:26:09.898194 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:26:09.898211 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:26:09.898230 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:26:09.898248 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:26:09.898265 systemd[1]: Successfully loaded SELinux policy in 81.853ms. Jun 25 16:26:09.898281 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.034ms. Jun 25 16:26:09.898299 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:26:09.898321 systemd[1]: Detected virtualization kvm. Jun 25 16:26:09.898342 systemd[1]: Detected architecture x86-64. Jun 25 16:26:09.898358 systemd[1]: Detected first boot. Jun 25 16:26:09.898376 systemd[1]: Hostname set to . 
Jun 25 16:26:09.898393 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:26:09.898413 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:26:09.898435 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:26:09.898459 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:26:09.898478 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:26:09.898496 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:26:09.898517 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:26:09.898543 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:26:09.898562 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:26:09.898581 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:26:09.898599 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:26:09.898621 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:26:09.898642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:09.898660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:26:09.898679 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:26:09.898703 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:26:09.898726 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:26:09.898745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:09.898763 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:26:09.898783 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:26:09.898804 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:26:09.898822 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:26:09.898840 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:26:09.898862 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:26:09.898882 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:26:09.898902 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:26:09.898945 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:26:09.898966 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:09.898984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:09.899005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:09.899025 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:26:09.899044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:26:09.899064 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:26:09.899087 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:26:09.899103 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
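"Initializing machine ID from VM UUID" above is systemd seeding /etc/machine-id from the hypervisor-provided DMI product UUID on first boot in the KVM guest. A minimal sketch of that derivation, assuming the usual /sys/class/dmi/id/product_uuid path and dash-stripped, lower-cased formatting; the exact rules systemd applies are not shown in this log:

# Minimal sketch of deriving a machine-id-style string from the VM's DMI
# product UUID, as hinted by "Initializing machine ID from VM UUID" above.
import pathlib
import re

def machine_id_from_vm_uuid(dmi_path: str = "/sys/class/dmi/id/product_uuid") -> str:
    uuid = pathlib.Path(dmi_path).read_text().strip().lower()
    machine_id = uuid.replace("-", "")          # 32 lower-case hex chars
    if not re.fullmatch(r"[0-9a-f]{32}", machine_id):
        raise ValueError(f"unexpected UUID format: {uuid!r}")
    return machine_id

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())
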
Jun 25 16:26:09.899116 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:26:09.899129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:26:09.899142 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:26:09.899156 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:26:09.899170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:09.899185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:26:09.899199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:26:09.899212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:09.899226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:26:09.899240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:09.899254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:26:09.899267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:09.899280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:26:09.899294 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 16:26:09.899310 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 16:26:09.899322 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:26:09.899335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:26:09.899348 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:26:09.899361 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:26:09.899374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:26:09.899389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:09.899402 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:26:09.899418 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:26:09.899431 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:26:09.899444 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:26:09.899457 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:26:09.899470 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:26:09.899483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:09.899495 kernel: kauditd_printk_skb: 42 callbacks suppressed Jun 25 16:26:09.899509 kernel: audit: type=1130 audit(1719332769.872:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.899524 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jun 25 16:26:09.899538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:26:09.899551 kernel: audit: type=1130 audit(1719332769.879:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.899565 kernel: audit: type=1131 audit(1719332769.882:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.899580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:09.899593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:09.899606 kernel: audit: type=1130 audit(1719332769.888:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.899620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:09.899633 kernel: audit: type=1305 audit(1719332769.882:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:26:09.899660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:09.899674 kernel: audit: type=1131 audit(1719332769.891:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.899692 systemd-journald[1019]: Journal started Jun 25 16:26:09.899741 systemd-journald[1019]: Runtime Journal (/run/log/journal/de8de52328094ffd9f6a5ac839821496) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:26:09.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.882000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:26:09.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.909791 systemd[1]: Started systemd-journald.service - Journal Service. 
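The journald size report above ("Runtime Journal ... is 4.9M, max 39.3M, 34.4M free") is internally consistent: the free figure is the configured cap minus current usage. A small sketch that parses the line as logged and checks that relation:

# Parse the journald size report quoted above and confirm that
# free == max - used (values in MiB, copied from the log).
import re

line = ("Runtime Journal (/run/log/journal/de8de52328094ffd9f6a5ac839821496) "
        "is 4.9M, max 39.3M, 34.4M free.")

m = re.search(r"is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free", line)
used, cap, free = (float(x) for x in m.groups())
assert abs(cap - used - free) < 0.05   # 39.3 - 4.9 == 34.4
print(f"journal using {used}M of {cap}M ({free}M free)")
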
Jun 25 16:26:09.909835 kernel: audit: type=1300 audit(1719332769.882:93): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc662cd250 a2=4000 a3=7ffc662cd2ec items=0 ppid=1 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:09.882000 audit[1019]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc662cd250 a2=4000 a3=7ffc662cd2ec items=0 ppid=1 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:09.903099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:09.908338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:26:09.909211 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:26:09.911087 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:26:09.919819 kernel: audit: type=1327 audit(1719332769.882:93): proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:26:09.919875 kernel: audit: type=1130 audit(1719332769.900:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.919894 kernel: audit: type=1131 audit(1719332769.900:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.882000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:26:09.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.917045 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
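The audit records interleaved throughout this log (SERVICE_START/SERVICE_STOP, SYSCALL, PROCTITLE, BPF, CONFIG_CHANGE) all carry a space-separated key=value payload after the audit prefix. A minimal sketch for pulling those fields out of one record, fed with a SERVICE_START payload like the ones above; quoting inside msg='...' is handled only loosely and the nested fields are not unpacked further:

# Minimal parser for the key=value payload of the audit records in this log.
# Values may be bare tokens, 'single-quoted' or "double-quoted".
import re

AUDIT_FIELD = re.compile(r"""(\w+)=('[^']*'|"[^"]*"|\S+)""")

def parse_audit_fields(payload: str) -> dict:
    return {k: v.strip("'\"") for k, v in AUDIT_FIELD.findall(payload)}

sample = ("pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 "
          "msg='unit=systemd-journald comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
          "hostname=? addr=? terminal=? res=success'")

fields = parse_audit_fields(sample)
print(fields["pid"], fields["subj"], fields["msg"][:20])
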
Jun 25 16:26:09.923982 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:26:09.926833 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:26:09.929436 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:26:09.930193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:09.933994 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:26:09.949681 systemd-journald[1019]: Time spent on flushing to /var/log/journal/de8de52328094ffd9f6a5ac839821496 is 68.534ms for 1016 entries. Jun 25 16:26:09.949681 systemd-journald[1019]: System Journal (/var/log/journal/de8de52328094ffd9f6a5ac839821496) is 8.0M, max 584.8M, 576.8M free. Jun 25 16:26:10.049786 systemd-journald[1019]: Received client request to flush runtime journal. Jun 25 16:26:10.049857 kernel: loop: module loaded Jun 25 16:26:10.049882 kernel: ACPI: bus type drm_connector registered Jun 25 16:26:10.049901 kernel: fuse: init (API version 7.37) Jun 25 16:26:09.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.937881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:26:10.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:09.940405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:26:09.953763 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:09.954113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:09.955040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:09.977175 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. 
Jun 25 16:26:09.977890 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:26:10.021152 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:26:10.021392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:26:10.034327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:10.051287 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:26:10.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.066159 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:26:10.066396 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:26:10.071155 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:26:10.076262 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:26:10.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.080699 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:10.085166 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:26:10.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.096528 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:26:10.103182 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:26:10.104205 udevadm[1071]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:26:10.135411 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:26:10.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.140173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:26:10.166190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:26:10.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.815563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:26:10.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:10.822413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:26:10.886106 systemd-udevd[1083]: Using default interface naming scheme 'v252'. Jun 25 16:26:10.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.942238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:26:10.957209 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:26:10.973399 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:26:11.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.024312 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:26:11.036480 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 16:26:11.042975 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1085) Jun 25 16:26:11.062732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1090) Jun 25 16:26:11.115114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:26:11.119086 systemd-networkd[1092]: lo: Link UP Jun 25 16:26:11.119099 systemd-networkd[1092]: lo: Gained carrier Jun 25 16:26:11.119533 systemd-networkd[1092]: Enumeration completed Jun 25 16:26:11.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.119664 systemd-networkd[1092]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:11.119671 systemd-networkd[1092]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:26:11.119680 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:26:11.123034 systemd-networkd[1092]: eth0: Link UP Jun 25 16:26:11.123045 systemd-networkd[1092]: eth0: Gained carrier Jun 25 16:26:11.123058 systemd-networkd[1092]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:11.127156 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 16:26:11.137282 systemd-networkd[1092]: eth0: DHCPv4 address 172.24.4.182/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 16:26:11.158948 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:26:11.174957 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:26:11.186949 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 16:26:11.223944 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:26:11.232954 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:26:11.265953 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 25 16:26:11.266038 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 25 16:26:11.297695 kernel: Console: switching to colour dummy device 80x25 Jun 25 16:26:11.298969 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 25 16:26:11.299057 kernel: [drm] features: -context_init Jun 25 16:26:11.302013 kernel: [drm] number of scanouts: 1 Jun 25 16:26:11.302153 kernel: [drm] number of cap sets: 0 Jun 25 16:26:11.307944 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 25 16:26:11.314954 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 25 16:26:11.315952 kernel: virtio-pci 0000:00:02.0: [drm] drm_plane_enable_fb_damage_clips() not called Jun 25 16:26:11.316148 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:26:11.324950 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 25 16:26:11.337196 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:26:11.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.343507 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:26:11.360154 lvm[1118]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:26:11.389504 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:26:11.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.390570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:11.399223 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:26:11.403500 lvm[1120]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:26:11.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.431164 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:26:11.432294 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:11.433531 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
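The DHCPv4 lease recorded just above (172.24.4.182/24, gateway 172.24.4.1, offered by 172.24.4.1) can be confirmed from the booted host. A minimal sketch, assuming a Python 3 interpreter and an iproute2 `ip` binary that supports JSON output (`-j`), with the interface name eth0 taken from the log; this is an illustration, not part of the boot flow:

    # Sketch: read the addresses systemd-networkd configured on eth0,
    # using iproute2's JSON output.
    import json
    import subprocess

    def interface_addresses(ifname: str = "eth0"):
        out = subprocess.run(
            ["ip", "-j", "addr", "show", "dev", ifname],
            check=True, capture_output=True, text=True,
        ).stdout
        addrs = []
        for link in json.loads(out):
            for a in link.get("addr_info", []):
                addrs.append(f'{a["local"]}/{a["prefixlen"]}')
        return addrs

    if __name__ == "__main__":
        # On the node in this log, the output would include 172.24.4.182/24.
        print(interface_addresses())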
Jun 25 16:26:11.433595 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:26:11.434741 systemd[1]: Reached target machines.target - Containers. Jun 25 16:26:11.445223 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:26:11.448475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:11.448831 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:11.453051 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:26:11.457825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:26:11.462553 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:26:11.473009 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:26:11.485535 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1123 (bootctl) Jun 25 16:26:11.489210 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:26:11.546001 kernel: loop0: detected capacity change from 0 to 139360 Jun 25 16:26:11.581965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:26:11.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.276472 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:26:12.279284 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:26:12.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.304991 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:26:12.343986 kernel: loop1: detected capacity change from 0 to 8 Jun 25 16:26:12.364590 kernel: loop2: detected capacity change from 0 to 80584 Jun 25 16:26:12.441302 kernel: loop3: detected capacity change from 0 to 209816 Jun 25 16:26:12.447164 systemd-fsck[1131]: fsck.fat 4.2 (2021-01-31) Jun 25 16:26:12.447164 systemd-fsck[1131]: /dev/vda1: 808 files, 120378/258078 clusters Jun 25 16:26:12.452353 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:26:12.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.459148 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:26:12.485351 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:26:12.518786 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. 
Jun 25 16:26:12.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.523950 kernel: loop4: detected capacity change from 0 to 139360 Jun 25 16:26:12.579008 kernel: loop5: detected capacity change from 0 to 8 Jun 25 16:26:12.583134 kernel: loop6: detected capacity change from 0 to 80584 Jun 25 16:26:12.634017 kernel: loop7: detected capacity change from 0 to 209816 Jun 25 16:26:12.675487 (sd-sysext)[1142]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jun 25 16:26:12.676079 (sd-sysext)[1142]: Merged extensions into '/usr'. Jun 25 16:26:12.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.679068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:26:12.692151 systemd[1]: Starting ensure-sysext.service... Jun 25 16:26:12.696908 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:26:12.715132 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:26:12.722592 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:26:12.723873 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:26:12.725184 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:26:12.726007 systemd[1]: Reloading. Jun 25 16:26:12.935360 systemd-networkd[1092]: eth0: Gained IPv6LL Jun 25 16:26:12.974546 ldconfig[1122]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:26:12.992325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:26:13.067703 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:26:13.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.073503 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:26:13.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.080544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:26:13.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.091097 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jun 25 16:26:13.103067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:26:13.109502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:26:13.117724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:26:13.131079 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:26:13.135181 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:26:13.145824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.146974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:13.149665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:13.158000 audit[1229]: SYSTEM_BOOT pid=1229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.166581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:13.170699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:13.173818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:13.174154 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:13.174299 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.175410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:13.175600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:13.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.185089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:13.185284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:13.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.191959 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:13.193113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 25 16:26:13.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.208743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:13.209016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:13.212606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.212911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:13.220223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:13.223967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:13.236267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:13.238591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:13.238785 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:13.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.239561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.241095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:26:13.242507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:13.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.242689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 16:26:13.246819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:13.247011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:13.253277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.253593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:13.259329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:13.263062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:26:13.281417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:13.282184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:13.282349 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:13.282530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:13.283915 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:26:13.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.289553 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:13.289788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:13.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.291526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:13.291716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:13.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.306641 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:26:13.306909 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:26:13.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:13.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.314084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:13.314330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:13.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.318968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:13.319081 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:13.323155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:26:13.328646 systemd[1]: Finished ensure-sysext.service. Jun 25 16:26:13.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.340000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:26:13.340000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf0fb5380 a2=420 a3=0 items=0 ppid=1218 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:13.340000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:26:13.342063 augenrules[1257]: No rules Jun 25 16:26:13.342696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:13.357074 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:26:13.390585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:26:13.391306 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:26:13.413444 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:26:13.414325 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:26:13.417372 systemd-resolved[1227]: Positive Trust Anchors: Jun 25 16:26:13.417393 systemd-resolved[1227]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:26:13.417433 systemd-resolved[1227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:26:13.423276 systemd-resolved[1227]: Using system hostname 'ci-3815-2-4-3-54e11b9a94.novalocal'. Jun 25 16:26:13.428388 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:26:13.429710 systemd[1]: Reached target network.target - Network. Jun 25 16:26:13.430236 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:26:13.430684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:13.431152 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:26:13.431678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:26:13.432178 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:26:13.432794 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:26:13.433704 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:26:13.435777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:26:13.437889 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:26:13.437945 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:26:13.988451 systemd-timesyncd[1228]: Contacted time server 82.65.248.56:123 (0.flatcar.pool.ntp.org). Jun 25 16:26:13.988927 systemd-timesyncd[1228]: Initial clock synchronization to Tue 2024-06-25 16:26:13.988326 UTC. Jun 25 16:26:13.989087 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:26:13.989132 systemd-resolved[1227]: Clock change detected. Flushing caches. Jun 25 16:26:13.992116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:26:14.001725 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:26:14.005361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:26:14.007138 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:14.007582 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:26:14.008313 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:26:14.008772 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:26:14.009490 systemd[1]: System is tainted: cgroupsv1 Jun 25 16:26:14.009540 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:26:14.009563 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
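The audit PROCTITLE record a few entries back encodes the auditctl invocation as hex with NUL-separated arguments. A small decoding sketch in plain Python (standard library only; the list[str] hint assumes Python 3.9+) that recovers the command line shown there:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> list[str]:
        return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

    if __name__ == "__main__":
        proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
        # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
        print(decode_proctitle(proctitle))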
Jun 25 16:26:14.011082 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:26:14.013795 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:26:14.037285 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:26:14.052145 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:26:14.069118 jq[1280]: false Jun 25 16:26:14.074168 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:26:14.079627 dbus-daemon[1278]: [system] SELinux support is enabled Jun 25 16:26:14.079714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:26:14.089509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:14.094033 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:26:14.101154 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:26:14.115088 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:26:14.119457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:26:14.122266 extend-filesystems[1282]: Found loop4 Jun 25 16:26:14.124543 extend-filesystems[1282]: Found loop5 Jun 25 16:26:14.126308 extend-filesystems[1282]: Found loop6 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found loop7 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda1 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda2 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda3 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found usr Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda4 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda6 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda7 Jun 25 16:26:14.128040 extend-filesystems[1282]: Found vda9 Jun 25 16:26:14.128040 extend-filesystems[1282]: Checking size of /dev/vda9 Jun 25 16:26:14.272735 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jun 25 16:26:14.272800 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1112) Jun 25 16:26:14.134268 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:26:14.273115 extend-filesystems[1282]: Resized partition /dev/vda9 Jun 25 16:26:14.143656 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:26:14.285348 extend-filesystems[1311]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:26:14.172742 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:14.172854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:26:14.179181 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:26:14.199102 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 25 16:26:14.310558 update_engine[1310]: I0625 16:26:14.239234 1310 main.cc:92] Flatcar Update Engine starting Jun 25 16:26:14.310558 update_engine[1310]: I0625 16:26:14.244582 1310 update_check_scheduler.cc:74] Next update check in 5m33s Jun 25 16:26:14.252248 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:26:14.311078 jq[1312]: true Jun 25 16:26:14.261590 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:26:14.262185 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:26:14.263950 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:26:14.264440 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:26:14.269419 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:26:14.292619 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:26:14.292944 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:26:14.331097 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:26:14.337412 tar[1321]: linux-amd64/helm Jun 25 16:26:14.342513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:26:14.342553 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:26:14.343119 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:26:14.343145 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:26:14.346143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:26:14.348548 jq[1324]: true Jun 25 16:26:14.351184 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:26:14.395023 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jun 25 16:26:14.412560 systemd-logind[1302]: New seat seat0. Jun 25 16:26:14.505510 locksmithd[1326]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:26:14.536628 systemd-logind[1302]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:26:14.536647 systemd-logind[1302]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:26:14.541600 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:26:14.548704 extend-filesystems[1311]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:26:14.548704 extend-filesystems[1311]: old_desc_blocks = 1, new_desc_blocks = 3 Jun 25 16:26:14.548704 extend-filesystems[1311]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jun 25 16:26:14.576406 extend-filesystems[1282]: Resized filesystem in /dev/vda9 Jun 25 16:26:14.550493 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:26:14.550932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:26:14.596553 bash[1342]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:26:14.597687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
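The extend-filesystems output above reports the ext4 filesystem on /dev/vda9 growing from 1617920 to 4635643 blocks of 4 KiB. A short sketch that turns those block counts into sizes, using only numbers already present in the log:

    # Sketch: convert the ext4 block counts reported above into sizes.
    # Block size is 4 KiB ("4k" in the resize2fs/extend-filesystems output).
    BLOCK = 4096

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    old_blocks, new_blocks = 1_617_920, 4_635_643
    print(f"before: {gib(old_blocks):.1f} GiB, after: {gib(new_blocks):.1f} GiB")
    # prints: before: 6.2 GiB, after: 17.7 GiB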
Jun 25 16:26:14.609555 systemd[1]: Starting sshkeys.service... Jun 25 16:26:14.622581 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:26:14.638052 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 16:26:14.969356 containerd[1325]: time="2024-06-25T16:26:14.969163381Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:26:15.056864 containerd[1325]: time="2024-06-25T16:26:15.056809381Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:26:15.056864 containerd[1325]: time="2024-06-25T16:26:15.056862059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.066692 containerd[1325]: time="2024-06-25T16:26:15.066560659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:15.066692 containerd[1325]: time="2024-06-25T16:26:15.066598250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.066909 containerd[1325]: time="2024-06-25T16:26:15.066876622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:15.066909 containerd[1325]: time="2024-06-25T16:26:15.066904885Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:26:15.067062 containerd[1325]: time="2024-06-25T16:26:15.067035209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067128 containerd[1325]: time="2024-06-25T16:26:15.067112053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067160 containerd[1325]: time="2024-06-25T16:26:15.067130227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067217251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067482839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067503307Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067516642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067661935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067680700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067749299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:26:15.067905 containerd[1325]: time="2024-06-25T16:26:15.067766270Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:26:15.083566 containerd[1325]: time="2024-06-25T16:26:15.083514676Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:26:15.083566 containerd[1325]: time="2024-06-25T16:26:15.083560963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:26:15.083566 containerd[1325]: time="2024-06-25T16:26:15.083577975Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:26:15.083809 containerd[1325]: time="2024-06-25T16:26:15.083622829Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:26:15.083809 containerd[1325]: time="2024-06-25T16:26:15.083643658Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:26:15.083809 containerd[1325]: time="2024-06-25T16:26:15.083657725Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:26:15.083809 containerd[1325]: time="2024-06-25T16:26:15.083720322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:26:15.083909 containerd[1325]: time="2024-06-25T16:26:15.083872548Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:26:15.083909 containerd[1325]: time="2024-06-25T16:26:15.083893126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:26:15.083961 containerd[1325]: time="2024-06-25T16:26:15.083910068Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:26:15.083961 containerd[1325]: time="2024-06-25T16:26:15.083927340Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:26:15.083961 containerd[1325]: time="2024-06-25T16:26:15.083944292Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084079 containerd[1325]: time="2024-06-25T16:26:15.083964590Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084079 containerd[1325]: time="2024-06-25T16:26:15.083982203Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084079 containerd[1325]: time="2024-06-25T16:26:15.084034451Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084079 containerd[1325]: time="2024-06-25T16:26:15.084051473Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 16:26:15.084079 containerd[1325]: time="2024-06-25T16:26:15.084075057Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084210 containerd[1325]: time="2024-06-25T16:26:15.084090927Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.084210 containerd[1325]: time="2024-06-25T16:26:15.084106577Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:26:15.084300 containerd[1325]: time="2024-06-25T16:26:15.084248653Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084631842Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084669132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084685973Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084711992Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084762056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084777986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084797332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084812140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084827118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084842196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084861462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084875980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.084891388Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:26:15.085315 containerd[1325]: time="2024-06-25T16:26:15.085060636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085083238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085097766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085112213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085125948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085141057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085156456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085673 containerd[1325]: time="2024-06-25T16:26:15.085168889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 16:26:15.085830 containerd[1325]: time="2024-06-25T16:26:15.085440007Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:26:15.085830 containerd[1325]: time="2024-06-25T16:26:15.085515349Z" level=info msg="Connect containerd service" Jun 25 16:26:15.085830 containerd[1325]: time="2024-06-25T16:26:15.085549262Z" level=info msg="using legacy CRI server" Jun 25 16:26:15.085830 containerd[1325]: time="2024-06-25T16:26:15.085558309Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:26:15.085830 containerd[1325]: time="2024-06-25T16:26:15.085582404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:26:15.086279 containerd[1325]: time="2024-06-25T16:26:15.086242943Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.088730699Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.088802203Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.088822702Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.088846687Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089479924Z" level=info msg="Start subscribing containerd event" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089581425Z" level=info msg="Start recovering state" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089662847Z" level=info msg="Start event monitor" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089689487Z" level=info msg="Start snapshots syncer" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089700768Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:26:15.091012 containerd[1325]: time="2024-06-25T16:26:15.089713883Z" level=info msg="Start streaming server" Jun 25 16:26:15.095575 containerd[1325]: time="2024-06-25T16:26:15.095533297Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:26:15.095859 containerd[1325]: time="2024-06-25T16:26:15.095841065Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:26:15.098048 containerd[1325]: time="2024-06-25T16:26:15.098032615Z" level=info msg="containerd successfully booted in 0.134230s" Jun 25 16:26:15.098161 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:26:15.345039 tar[1321]: linux-amd64/LICENSE Jun 25 16:26:15.345039 tar[1321]: linux-amd64/README.md Jun 25 16:26:15.357166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:26:15.643162 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:26:15.681222 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
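containerd reports serving on /run/containerd/containerd.sock. A minimal liveness check for that socket, as a standard-library Python sketch (run as root on a default install; a real client would speak the containerd/CRI API over this socket rather than merely connecting):

    # Sketch: confirm the containerd socket named in the log accepts connections.
    import socket

    def socket_up(path: str = "/run/containerd/containerd.sock") -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    if __name__ == "__main__":
        print(socket_up())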
Jun 25 16:26:15.692457 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:26:15.707039 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:26:15.707381 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:26:15.718416 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:26:15.740766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:26:15.746538 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:26:15.749545 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:26:15.750437 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:26:16.081443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:18.156414 kubelet[1401]: E0625 16:26:18.156135 1401 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:18.159369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:18.159586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:18.283124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:26:18.304029 systemd[1]: Started sshd@0-172.24.4.182:22-172.24.4.1:38596.service - OpenSSH per-connection server daemon (172.24.4.1:38596). Jun 25 16:26:19.658278 sshd[1410]: Accepted publickey for core from 172.24.4.1 port 38596 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:19.664426 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:19.694653 systemd-logind[1302]: New session 1 of user core. Jun 25 16:26:19.699071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:26:19.713731 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:26:19.754112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:26:19.772609 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:26:19.790145 (systemd)[1415]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:19.933498 systemd[1415]: Queued start job for default target default.target. Jun 25 16:26:19.933749 systemd[1415]: Reached target paths.target - Paths. Jun 25 16:26:19.933768 systemd[1415]: Reached target sockets.target - Sockets. Jun 25 16:26:19.933782 systemd[1415]: Reached target timers.target - Timers. Jun 25 16:26:19.933796 systemd[1415]: Reached target basic.target - Basic System. Jun 25 16:26:19.933850 systemd[1415]: Reached target default.target - Main User Target. Jun 25 16:26:19.933876 systemd[1415]: Startup finished in 129ms. Jun 25 16:26:19.935638 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:26:19.950927 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:26:20.306546 systemd[1]: Started sshd@1-172.24.4.182:22-172.24.4.1:38606.service - OpenSSH per-connection server daemon (172.24.4.1:38606). 
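The kubelet failure above is the unit exiting because /var/lib/kubelet/config.yaml does not exist yet at this stage of first boot; the file is presumably provisioned later by whatever joins the node to a cluster. A trivial sketch of the same pre-flight check, using only the path from the error message:

    # Sketch: the check kubelet effectively fails here -- its config file
    # (/var/lib/kubelet/config.yaml, per the error above) is not present yet.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_present() -> bool:
        return KUBELET_CONFIG.is_file()

    if __name__ == "__main__":
        print("kubelet config present:", kubelet_config_present())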
Jun 25 16:26:21.159593 coreos-metadata[1274]: Jun 25 16:26:21.159 WARN failed to locate config-drive, using the metadata service API instead Jun 25 16:26:21.261840 coreos-metadata[1274]: Jun 25 16:26:21.261 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 25 16:26:21.462217 coreos-metadata[1274]: Jun 25 16:26:21.461 INFO Fetch successful Jun 25 16:26:21.462434 coreos-metadata[1274]: Jun 25 16:26:21.462 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 16:26:21.470298 coreos-metadata[1274]: Jun 25 16:26:21.470 INFO Fetch successful Jun 25 16:26:21.470471 coreos-metadata[1274]: Jun 25 16:26:21.470 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 25 16:26:21.484122 coreos-metadata[1274]: Jun 25 16:26:21.483 INFO Fetch successful Jun 25 16:26:21.484296 coreos-metadata[1274]: Jun 25 16:26:21.484 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 25 16:26:21.495272 coreos-metadata[1274]: Jun 25 16:26:21.495 INFO Fetch successful Jun 25 16:26:21.495501 coreos-metadata[1274]: Jun 25 16:26:21.495 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 25 16:26:21.505933 coreos-metadata[1274]: Jun 25 16:26:21.505 INFO Fetch successful Jun 25 16:26:21.506169 coreos-metadata[1274]: Jun 25 16:26:21.506 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 25 16:26:21.519832 coreos-metadata[1274]: Jun 25 16:26:21.519 INFO Fetch successful Jun 25 16:26:21.536644 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:26:21.537639 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:26:21.600499 sshd[1424]: Accepted publickey for core from 172.24.4.1 port 38606 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:21.603714 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:21.616873 systemd-logind[1302]: New session 2 of user core. Jun 25 16:26:21.622729 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:26:21.799114 coreos-metadata[1365]: Jun 25 16:26:21.798 WARN failed to locate config-drive, using the metadata service API instead Jun 25 16:26:21.882645 coreos-metadata[1365]: Jun 25 16:26:21.882 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 25 16:26:21.899764 coreos-metadata[1365]: Jun 25 16:26:21.899 INFO Fetch successful Jun 25 16:26:21.899889 coreos-metadata[1365]: Jun 25 16:26:21.899 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 16:26:21.915145 coreos-metadata[1365]: Jun 25 16:26:21.915 INFO Fetch successful Jun 25 16:26:21.917554 unknown[1365]: wrote ssh authorized keys file for user: core Jun 25 16:26:21.961793 update-ssh-keys[1438]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:26:21.963319 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:26:21.966969 systemd[1]: Finished sshkeys.service. Jun 25 16:26:21.967902 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:26:21.975634 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:26:21.991466 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
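Editor's note: coreos-metadata above falls back from the missing config drive to the metadata service at 169.254.169.254 and fetches each attribute with numbered attempts ("Attempt #1" ... "Fetch successful"). A small sketch of the same fetch-with-retry pattern; the base URL and attribute names are taken from the log, while the attempt count and delay are illustrative assumptions.

    # Sketch of the "Fetching ... Attempt #N" / "Fetch successful" pattern above.
    import time
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data"
    KEYS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

    def fetch(url, attempts=3, delay=1.0):
        for n in range(1, attempts + 1):
            print(f"Fetching {url}: Attempt #{n}")
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    print("Fetch successful")
                    return resp.read().decode()
            except OSError:
                time.sleep(delay)
        raise RuntimeError(f"giving up on {url}")

    metadata = {key: fetch(f"{BASE}/{key}") for key in KEYS}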
Jun 25 16:26:21.991921 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:26:21.993195 systemd[1]: Startup finished in 9.266s (kernel) + 12.620s (userspace) = 21.886s. Jun 25 16:26:22.311716 sshd[1424]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:22.323813 systemd[1]: Started sshd@2-172.24.4.182:22-172.24.4.1:38618.service - OpenSSH per-connection server daemon (172.24.4.1:38618). Jun 25 16:26:22.325956 systemd[1]: sshd@1-172.24.4.182:22-172.24.4.1:38606.service: Deactivated successfully. Jun 25 16:26:22.334073 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:26:22.335323 systemd-logind[1302]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:26:22.338735 systemd-logind[1302]: Removed session 2. Jun 25 16:26:23.656588 sshd[1446]: Accepted publickey for core from 172.24.4.1 port 38618 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:23.658783 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:23.667453 systemd-logind[1302]: New session 3 of user core. Jun 25 16:26:23.678659 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:26:24.299404 sshd[1446]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.302323 systemd[1]: Started sshd@3-172.24.4.182:22-172.24.4.1:38634.service - OpenSSH per-connection server daemon (172.24.4.1:38634). Jun 25 16:26:24.302898 systemd[1]: sshd@2-172.24.4.182:22-172.24.4.1:38618.service: Deactivated successfully. Jun 25 16:26:24.305583 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:26:24.306389 systemd-logind[1302]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:26:24.318436 systemd-logind[1302]: Removed session 3. Jun 25 16:26:25.463969 sshd[1453]: Accepted publickey for core from 172.24.4.1 port 38634 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:25.469305 sshd[1453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:25.483136 systemd-logind[1302]: New session 4 of user core. Jun 25 16:26:25.491909 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:26:26.101346 sshd[1453]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:26.111275 systemd[1]: Started sshd@4-172.24.4.182:22-172.24.4.1:58446.service - OpenSSH per-connection server daemon (172.24.4.1:58446). Jun 25 16:26:26.113866 systemd[1]: sshd@3-172.24.4.182:22-172.24.4.1:38634.service: Deactivated successfully. Jun 25 16:26:26.117861 systemd-logind[1302]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:26:26.120037 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:26:26.122503 systemd-logind[1302]: Removed session 4. Jun 25 16:26:27.472718 sshd[1460]: Accepted publickey for core from 172.24.4.1 port 58446 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:27.476464 sshd[1460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:27.489298 systemd-logind[1302]: New session 5 of user core. Jun 25 16:26:27.497658 systemd[1]: Started session-5.scope - Session 5 of User core. 
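Editor's note: the "Startup finished" record above splits boot time into kernel and userspace phases, and the reported total is simply their sum (9.266 s + 12.620 s = 21.886 s). A throwaway sketch that parses such a line and checks the arithmetic; the regex is an assumption tailored to this message format.

    # Sketch: parse "Startup finished in Xs (kernel) + Ys (userspace) = Zs" and check the sum.
    import re

    line = "Startup finished in 9.266s (kernel) + 12.620s (userspace) = 21.886s."
    m = re.search(r"in ([\d.]+)s \(kernel\) \+ ([\d.]+)s \(userspace\) = ([\d.]+)s", line)
    kernel, userspace, total = map(float, m.groups())
    assert abs(kernel + userspace - total) < 1e-9  # 9.266 + 12.620 == 21.886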
Jun 25 16:26:27.974658 sudo[1466]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:26:27.976130 sudo[1466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:28.001472 sudo[1466]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:28.209488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:26:28.209941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:28.212697 sshd[1460]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:28.221208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:28.231255 systemd[1]: Started sshd@5-172.24.4.182:22-172.24.4.1:58458.service - OpenSSH per-connection server daemon (172.24.4.1:58458). Jun 25 16:26:28.237522 systemd[1]: sshd@4-172.24.4.182:22-172.24.4.1:58446.service: Deactivated successfully. Jun 25 16:26:28.248860 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:26:28.256248 systemd-logind[1302]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:26:28.259718 systemd-logind[1302]: Removed session 5. Jun 25 16:26:28.590297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:28.796051 kubelet[1480]: E0625 16:26:28.795845 1480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:28.804775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:28.805298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:29.679911 sshd[1470]: Accepted publickey for core from 172.24.4.1 port 58458 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:29.682802 sshd[1470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:29.692395 systemd-logind[1302]: New session 6 of user core. Jun 25 16:26:29.700591 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:26:30.150613 sudo[1491]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:26:30.152287 sudo[1491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:30.160788 sudo[1491]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:30.173913 sudo[1490]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:26:30.174831 sudo[1490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:30.227701 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jun 25 16:26:30.231000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:26:30.233528 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:26:30.233613 kernel: audit: type=1305 audit(1719332790.231:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:26:30.233681 auditctl[1494]: No rules Jun 25 16:26:30.231000 audit[1494]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4c1322e0 a2=420 a3=0 items=0 ppid=1 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:30.234677 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:26:30.235208 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:30.241368 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:26:30.241810 kernel: audit: type=1300 audit(1719332790.231:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4c1322e0 a2=420 a3=0 items=0 ppid=1 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:30.231000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:26:30.250610 kernel: audit: type=1327 audit(1719332790.231:152): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:26:30.250700 kernel: audit: type=1131 audit(1719332790.234:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.286756 augenrules[1512]: No rules Jun 25 16:26:30.288734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:30.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.294327 sudo[1490]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:30.290000 audit[1490]: USER_END pid=1490 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.297884 kernel: audit: type=1130 audit(1719332790.288:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.297961 kernel: audit: type=1106 audit(1719332790.290:155): pid=1490 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:30.290000 audit[1490]: CRED_DISP pid=1490 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.306051 kernel: audit: type=1104 audit(1719332790.290:156): pid=1490 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.591912 sshd[1470]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:30.595000 audit[1470]: USER_END pid=1470 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:30.600015 kernel: audit: type=1106 audit(1719332790.595:157): pid=1470 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:30.598000 audit[1470]: CRED_DISP pid=1470 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:30.604013 kernel: audit: type=1104 audit(1719332790.598:158): pid=1470 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:30.605565 systemd[1]: Started sshd@6-172.24.4.182:22-172.24.4.1:58470.service - OpenSSH per-connection server daemon (172.24.4.1:58470). Jun 25 16:26:30.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.182:22-172.24.4.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.610014 kernel: audit: type=1130 audit(1719332790.604:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.182:22-172.24.4.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.610365 systemd[1]: sshd@5-172.24.4.182:22-172.24.4.1:58458.service: Deactivated successfully. Jun 25 16:26:30.612077 systemd-logind[1302]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:26:30.612229 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:26:30.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.182:22-172.24.4.1:58458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.617845 systemd-logind[1302]: Removed session 6. 
Jun 25 16:26:31.888000 audit[1517]: USER_ACCT pid=1517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:31.889978 sshd[1517]: Accepted publickey for core from 172.24.4.1 port 58470 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:26:31.890000 audit[1517]: CRED_ACQ pid=1517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:31.890000 audit[1517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcaf423b00 a2=3 a3=7f1b1c6c3480 items=0 ppid=1 pid=1517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:31.890000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:31.892374 sshd[1517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:31.901706 systemd-logind[1302]: New session 7 of user core. Jun 25 16:26:31.908504 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:26:31.919000 audit[1517]: USER_START pid=1517 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:31.922000 audit[1522]: CRED_ACQ pid=1522 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:26:32.449000 audit[1523]: USER_ACCT pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:32.450841 sudo[1523]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:26:32.450000 audit[1523]: CRED_REFR pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:32.452292 sudo[1523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:32.456000 audit[1523]: USER_START pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:32.735283 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:26:33.232903 dockerd[1532]: time="2024-06-25T16:26:33.232829472Z" level=info msg="Starting up" Jun 25 16:26:33.271938 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2453187105-merged.mount: Deactivated successfully. Jun 25 16:26:33.566787 systemd[1]: var-lib-docker-metacopy\x2dcheck392429233-merged.mount: Deactivated successfully. Jun 25 16:26:33.602911 dockerd[1532]: time="2024-06-25T16:26:33.602831240Z" level=info msg="Loading containers: start." 
Jun 25 16:26:33.730000 audit[1563]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.730000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee79a8390 a2=0 a3=7ff6236e5e90 items=0 ppid=1532 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:26:33.735000 audit[1565]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.735000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffce6dd0b60 a2=0 a3=7f6eb52f9e90 items=0 ppid=1532 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.735000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:26:33.740000 audit[1567]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.740000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd1d7323a0 a2=0 a3=7f6283c0de90 items=0 ppid=1532 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.740000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:26:33.745000 audit[1569]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.745000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd295ecd20 a2=0 a3=7f1797917e90 items=0 ppid=1532 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.745000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:26:33.752000 audit[1571]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.752000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6bf17250 a2=0 a3=7f86e48cee90 items=0 ppid=1532 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.752000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:26:33.758000 audit[1573]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:26:33.758000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe14e3f030 a2=0 a3=7f23afe64e90 items=0 ppid=1532 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:26:33.784000 audit[1575]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.784000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf6488830 a2=0 a3=7f8647cf0e90 items=0 ppid=1532 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.784000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:26:33.790000 audit[1577]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.790000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe0b3e01e0 a2=0 a3=7fa68a5b5e90 items=0 ppid=1532 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.790000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:26:33.795000 audit[1579]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.795000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd90798860 a2=0 a3=7fc639f27e90 items=0 ppid=1532 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.795000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:33.812000 audit[1583]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.812000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff5b5931b0 a2=0 a3=7fb8d5026e90 items=0 ppid=1532 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.812000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:33.814000 audit[1584]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.814000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff4a6922a0 a2=0 a3=7ff345969e90 items=0 ppid=1532 
pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.814000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:33.841608 kernel: Initializing XFRM netlink socket Jun 25 16:26:33.942000 audit[1592]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.942000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe4f1ac340 a2=0 a3=7ffae5986e90 items=0 ppid=1532 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.942000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:26:33.964000 audit[1595]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.964000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffca7135dd0 a2=0 a3=7f3ad58aae90 items=0 ppid=1532 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.964000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:26:33.974000 audit[1599]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.974000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcb3d43840 a2=0 a3=7f8396f3fe90 items=0 ppid=1532 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.974000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:26:33.979000 audit[1601]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.979000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc4bb83650 a2=0 a3=7f52d2533e90 items=0 ppid=1532 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.979000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:26:33.984000 audit[1603]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.984000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffde46ae4c0 
a2=0 a3=7f24cf0d2e90 items=0 ppid=1532 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.984000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:26:33.989000 audit[1605]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.989000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff5efed640 a2=0 a3=7f79ee023e90 items=0 ppid=1532 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.989000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:26:33.994000 audit[1607]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:33.994000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff96c671d0 a2=0 a3=7fa668169e90 items=0 ppid=1532 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:33.994000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:26:34.010000 audit[1610]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:34.010000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc36752970 a2=0 a3=7f8062187e90 items=0 ppid=1532 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.010000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:26:34.013000 audit[1612]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:34.013000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fffeca815e0 a2=0 a3=7f0dde496e90 items=0 ppid=1532 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:26:34.016000 audit[1614]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:26:34.016000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc366c2290 a2=0 a3=7f98a9d72e90 items=0 ppid=1532 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.016000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:26:34.019000 audit[1616]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:34.019000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9b806920 a2=0 a3=7f5f27a99e90 items=0 ppid=1532 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:26:34.020937 systemd-networkd[1092]: docker0: Link UP Jun 25 16:26:34.034000 audit[1620]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:34.034000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca891b820 a2=0 a3=7fec1da55e90 items=0 ppid=1532 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.034000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:34.035000 audit[1621]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:34.035000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcf1171210 a2=0 a3=7f6c9af66e90 items=0 ppid=1532 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.035000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:34.037238 dockerd[1532]: time="2024-06-25T16:26:34.037199147Z" level=info msg="Loading containers: done." 
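Editor's note: the audit PROCTITLE fields above are process command lines, hex-encoded with NUL-separated arguments; decoding them shows exactly which iptables invocations Docker issued while wiring up the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE-1/2 chains (and, earlier in this log, commands such as /sbin/auditctl -D and the sshd session title). A small decoder, assuming only the encoding just described; the two sample values are copied verbatim from records above.

    # Decode an audit PROCTITLE value: hex bytes, arguments separated by NUL.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode(errors="replace").strip()

    # First NETFILTER_CFG record above: Docker creating the DOCKER chain in the nat table.
    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D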
Jun 25 16:26:34.163939 dockerd[1532]: time="2024-06-25T16:26:34.163811498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:26:34.164276 dockerd[1532]: time="2024-06-25T16:26:34.164140906Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:26:34.164386 dockerd[1532]: time="2024-06-25T16:26:34.164321395Z" level=info msg="Daemon has completed initialization" Jun 25 16:26:34.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.235471 dockerd[1532]: time="2024-06-25T16:26:34.228696991Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:26:34.232277 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:26:36.242878 containerd[1325]: time="2024-06-25T16:26:36.242755257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:26:36.992491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227068297.mount: Deactivated successfully. Jun 25 16:26:38.874019 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 16:26:38.874147 kernel: audit: type=1130 audit(1719332798.869:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.870668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:26:38.879329 kernel: audit: type=1131 audit(1719332798.869:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.870888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:38.879276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:38.976958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:38.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.980086 kernel: audit: type=1130 audit(1719332798.976:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:39.346319 kubelet[1727]: E0625 16:26:39.346207 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:39.348380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:39.348556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:39.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:39.352120 kernel: audit: type=1131 audit(1719332799.347:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:39.646884 containerd[1325]: time="2024-06-25T16:26:39.646842592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.648955 containerd[1325]: time="2024-06-25T16:26:39.648918345Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186" Jun 25 16:26:39.650844 containerd[1325]: time="2024-06-25T16:26:39.650819711Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.655105 containerd[1325]: time="2024-06-25T16:26:39.655054012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.658579 containerd[1325]: time="2024-06-25T16:26:39.658531194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.659975 containerd[1325]: time="2024-06-25T16:26:39.659944715Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.417104999s" Jun 25 16:26:39.660106 containerd[1325]: time="2024-06-25T16:26:39.660082373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:26:39.688445 containerd[1325]: time="2024-06-25T16:26:39.688408428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:26:42.260959 containerd[1325]: time="2024-06-25T16:26:42.260795219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:42.262881 containerd[1325]: time="2024-06-25T16:26:42.262809226Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499" Jun 25 
16:26:42.264789 containerd[1325]: time="2024-06-25T16:26:42.264730740Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:42.270780 containerd[1325]: time="2024-06-25T16:26:42.270684456Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:42.274516 containerd[1325]: time="2024-06-25T16:26:42.274441332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:42.279119 containerd[1325]: time="2024-06-25T16:26:42.278974164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.590326567s" Jun 25 16:26:42.279244 containerd[1325]: time="2024-06-25T16:26:42.279127231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:26:42.333051 containerd[1325]: time="2024-06-25T16:26:42.332964793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:26:44.708492 containerd[1325]: time="2024-06-25T16:26:44.708412595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.709925 containerd[1325]: time="2024-06-25T16:26:44.709823822Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513" Jun 25 16:26:44.711392 containerd[1325]: time="2024-06-25T16:26:44.711370432Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.715731 containerd[1325]: time="2024-06-25T16:26:44.715669445Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.719478 containerd[1325]: time="2024-06-25T16:26:44.719456508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.721806 containerd[1325]: time="2024-06-25T16:26:44.721779955Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 2.388439578s" Jun 25 16:26:44.721929 containerd[1325]: time="2024-06-25T16:26:44.721908316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 
16:26:44.766467 containerd[1325]: time="2024-06-25T16:26:44.766435447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:26:46.235118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669880028.mount: Deactivated successfully. Jun 25 16:26:47.181941 containerd[1325]: time="2024-06-25T16:26:47.181697384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.184508 containerd[1325]: time="2024-06-25T16:26:47.184335819Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427" Jun 25 16:26:47.186095 containerd[1325]: time="2024-06-25T16:26:47.185939462Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.191184 containerd[1325]: time="2024-06-25T16:26:47.191116672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.196169 containerd[1325]: time="2024-06-25T16:26:47.196090464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.200580 containerd[1325]: time="2024-06-25T16:26:47.200435828Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.433726613s" Jun 25 16:26:47.200980 containerd[1325]: time="2024-06-25T16:26:47.200896052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:26:47.252170 containerd[1325]: time="2024-06-25T16:26:47.252061356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:26:47.923718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204771730.mount: Deactivated successfully. 
Jun 25 16:26:47.942737 containerd[1325]: time="2024-06-25T16:26:47.942665403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.945071 containerd[1325]: time="2024-06-25T16:26:47.944941022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 16:26:47.946508 containerd[1325]: time="2024-06-25T16:26:47.946453599Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.952415 containerd[1325]: time="2024-06-25T16:26:47.952364328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.957395 containerd[1325]: time="2024-06-25T16:26:47.957342318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:47.961334 containerd[1325]: time="2024-06-25T16:26:47.961199919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 709.011874ms" Jun 25 16:26:47.961477 containerd[1325]: time="2024-06-25T16:26:47.961331316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:26:48.002958 containerd[1325]: time="2024-06-25T16:26:48.002902234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:26:48.667537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226326714.mount: Deactivated successfully. Jun 25 16:26:49.371262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:26:49.381668 kernel: audit: type=1130 audit(1719332809.371:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.381828 kernel: audit: type=1131 audit(1719332809.371:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.371952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:49.384551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:49.490313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:26:49.493020 kernel: audit: type=1130 audit(1719332809.489:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:49.962938 kubelet[1822]: E0625 16:26:49.962873 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:49.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:49.964721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:49.964884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:49.968398 kernel: audit: type=1131 audit(1719332809.964:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:53.400267 containerd[1325]: time="2024-06-25T16:26:53.400181219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.435145 containerd[1325]: time="2024-06-25T16:26:53.435043962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 16:26:53.459140 containerd[1325]: time="2024-06-25T16:26:53.458983033Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.468852 containerd[1325]: time="2024-06-25T16:26:53.468787455Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.474783 containerd[1325]: time="2024-06-25T16:26:53.474688228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.479234 containerd[1325]: time="2024-06-25T16:26:53.479144258Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.476183788s" Jun 25 16:26:53.479409 containerd[1325]: time="2024-06-25T16:26:53.479242026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:26:53.535774 containerd[1325]: time="2024-06-25T16:26:53.535707253Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:26:54.228072 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1559306038.mount: Deactivated successfully. Jun 25 16:26:55.209121 containerd[1325]: time="2024-06-25T16:26:55.209067203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:55.211231 containerd[1325]: time="2024-06-25T16:26:55.211186876Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Jun 25 16:26:55.214105 containerd[1325]: time="2024-06-25T16:26:55.214056335Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:55.217630 containerd[1325]: time="2024-06-25T16:26:55.217606605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:55.223278 containerd[1325]: time="2024-06-25T16:26:55.223198523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:55.224911 containerd[1325]: time="2024-06-25T16:26:55.224870674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.689095457s" Jun 25 16:26:55.225075 containerd[1325]: time="2024-06-25T16:26:55.225049643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:26:59.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.018947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:59.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.023229 kernel: audit: type=1130 audit(1719332819.018:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.023308 kernel: audit: type=1131 audit(1719332819.020:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.034808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:59.086918 systemd[1]: Reloading. Jun 25 16:26:59.133365 update_engine[1310]: I0625 16:26:59.133312 1310 update_attempter.cc:509] Updating boot flags... 
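Editor's note: the containerd "Pulled image ... in <duration>" records above (kube-apiserver through coredns) report how long each control-plane image took to pull. A quick sketch for totalling those durations from a captured copy of this journal; it assumes only the ms/s duration suffixes that actually appear in the log, and the input file name is a stand-in.

    # Sketch: total the image pull times reported by containerd above.
    import re

    pattern = re.compile(r'in ([0-9.]+)(ms|s)"')  # matches e.g. ... in 3.417104999s" and ... in 709.011874ms"
    total = 0.0
    with open("boot.journal.txt") as journal:     # stand-in path for a saved copy of this log
        for line in journal:
            if "Pulled image" not in line:
                continue
            m = pattern.search(line)
            if m:
                value, unit = float(m.group(1)), m.group(2)
                total += value / 1000.0 if unit == "ms" else value
    print(f"total image pull time: {total:.3f}s")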
Jun 25 16:26:59.279147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1974) Jun 25 16:26:59.366347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:26:59.482213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1976) Jun 25 16:26:59.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.514095 kernel: audit: type=1130 audit(1719332819.509:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.510362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:59.527904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:59.570317 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:26:59.570648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:59.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.575044 kernel: audit: type=1131 audit(1719332819.569:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:59.578452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:59.610024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1976) Jun 25 16:27:00.052276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:00.061228 kernel: audit: type=1130 audit(1719332820.051:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:00.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:00.283119 kubelet[2019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:27:00.283954 kubelet[2019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:27:00.284147 kubelet[2019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 16:27:00.284387 kubelet[2019]: I0625 16:27:00.284335 2019 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:27:00.647035 kubelet[2019]: I0625 16:27:00.646916 2019 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:27:00.647200 kubelet[2019]: I0625 16:27:00.647083 2019 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:27:00.647906 kubelet[2019]: I0625 16:27:00.647873 2019 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:27:01.074895 kubelet[2019]: I0625 16:27:01.074454 2019 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:27:01.075787 kubelet[2019]: E0625 16:27:01.075720 2019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.126404 kubelet[2019]: I0625 16:27:01.126317 2019 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:27:01.149087 kubelet[2019]: I0625 16:27:01.149018 2019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:27:01.149273 kubelet[2019]: I0625 16:27:01.149246 2019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:27:01.149273 kubelet[2019]: I0625 16:27:01.149275 2019 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:27:01.149572 kubelet[2019]: I0625 16:27:01.149288 2019 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:27:01.164488 kubelet[2019]: I0625 16:27:01.164384 2019 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:27:01.176639 kubelet[2019]: I0625 16:27:01.176313 2019 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:27:01.176855 
kubelet[2019]: I0625 16:27:01.176832 2019 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:27:01.176953 kubelet[2019]: I0625 16:27:01.176891 2019 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:27:01.177813 kubelet[2019]: I0625 16:27:01.176918 2019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:27:01.187981 kubelet[2019]: W0625 16:27:01.187899 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-3-54e11b9a94.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.188293 kubelet[2019]: E0625 16:27:01.188266 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-3-54e11b9a94.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.189064 kubelet[2019]: W0625 16:27:01.188614 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.189064 kubelet[2019]: E0625 16:27:01.189061 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.189262 kubelet[2019]: I0625 16:27:01.189160 2019 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:27:01.242350 kubelet[2019]: W0625 16:27:01.240682 2019 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 16:27:01.242350 kubelet[2019]: I0625 16:27:01.242175 2019 server.go:1232] "Started kubelet" Jun 25 16:27:01.243020 kubelet[2019]: I0625 16:27:01.242913 2019 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:27:01.245233 kubelet[2019]: I0625 16:27:01.245198 2019 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:27:01.250899 kubelet[2019]: I0625 16:27:01.250861 2019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:27:01.271773 kubelet[2019]: I0625 16:27:01.271683 2019 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:27:01.272654 kubelet[2019]: I0625 16:27:01.272619 2019 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:27:01.272793 kubelet[2019]: I0625 16:27:01.272767 2019 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:27:01.279915 kubelet[2019]: W0625 16:27:01.279829 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.280319 kubelet[2019]: E0625 16:27:01.280268 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.280705 kubelet[2019]: E0625 16:27:01.280675 2019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-3-54e11b9a94.novalocal?timeout=10s\": dial tcp 172.24.4.182:6443: connect: connection refused" interval="200ms" Jun 25 16:27:01.281229 kubelet[2019]: E0625 16:27:01.281196 2019 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:27:01.281510 kubelet[2019]: E0625 16:27:01.281438 2019 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:27:01.300000 audit[2030]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.306113 kernel: audit: type=1325 audit(1719332821.300:207): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.314802 kernel: audit: type=1300 audit(1719332821.300:207): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd500800a0 a2=0 a3=7f6a1a4ebe90 items=0 ppid=2019 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.300000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd500800a0 a2=0 a3=7f6a1a4ebe90 items=0 ppid=2019 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:27:01.322050 kernel: audit: type=1327 audit(1719332821.300:207): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:27:01.331924 kubelet[2019]: I0625 16:27:01.327456 2019 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:27:01.331924 kubelet[2019]: I0625 16:27:01.330893 2019 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:27:01.335000 audit[2033]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.348425 kernel: audit: type=1325 audit(1719332821.335:208): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.348551 kernel: audit: type=1300 audit(1719332821.335:208): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff22060470 a2=0 a3=7f02e0746e90 items=0 ppid=2019 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.335000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff22060470 a2=0 a3=7f02e0746e90 items=0 ppid=2019 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.349796 kubelet[2019]: E0625 16:27:01.349623 2019 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815-2-4-3-54e11b9a94.novalocal.17dc4c176cce6ccc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815-2-4-3-54e11b9a94.novalocal", UID:"ci-3815-2-4-3-54e11b9a94.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815-2-4-3-54e11b9a94.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 242113228, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 242113228, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815-2-4-3-54e11b9a94.novalocal"}': 'Post "https://172.24.4.182:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.182:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:27:01.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:27:01.355000 audit[2035]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.355000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff7edffac0 a2=0 a3=7fbf1a7eae90 items=0 ppid=2019 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:01.360000 audit[2037]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.360000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8cd75d30 a2=0 a3=7fc5c6f53e90 items=0 ppid=2019 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:01.416383 kubelet[2019]: I0625 16:27:01.416337 2019 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.417870 kubelet[2019]: E0625 16:27:01.417363 2019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.182:6443/api/v1/nodes\": dial tcp 172.24.4.182:6443: connect: connection refused" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.419596 kubelet[2019]: I0625 16:27:01.419570 2019 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:27:01.419745 kubelet[2019]: I0625 16:27:01.419727 2019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:27:01.419869 kubelet[2019]: I0625 16:27:01.419852 2019 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:27:01.428000 audit[2041]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.428000 
audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe6e6ddac0 a2=0 a3=7f0ed97a4e90 items=0 ppid=2019 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:27:01.430494 kubelet[2019]: I0625 16:27:01.430309 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:27:01.431000 audit[2042]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.431000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca8728aa0 a2=0 a3=7ff71fb69e90 items=0 ppid=2019 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:27:01.432000 audit[2043]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:01.432000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0e024ca0 a2=0 a3=7fd79922ae90 items=0 ppid=2019 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:27:01.433960 kubelet[2019]: I0625 16:27:01.433935 2019 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:27:01.434097 kubelet[2019]: I0625 16:27:01.434062 2019 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:27:01.434156 kubelet[2019]: I0625 16:27:01.434111 2019 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:27:01.434242 kubelet[2019]: E0625 16:27:01.434214 2019 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:27:01.437000 audit[2045]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:01.437000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe30701b10 a2=0 a3=7fc23b878e90 items=0 ppid=2019 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:27:01.440174 kubelet[2019]: W0625 16:27:01.440138 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.439000 audit[2044]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.439000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec703ecd0 a2=0 a3=7fea69e58e90 items=0 ppid=2019 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:27:01.440639 kubelet[2019]: E0625 16:27:01.440600 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:01.441000 audit[2047]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:01.441000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcb756c020 a2=0 a3=7f5cb9df0e90 items=0 ppid=2019 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:27:01.441000 audit[2048]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:01.441000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfac47860 a2=0 a3=4 items=0 ppid=2019 pid=2048 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:27:01.444000 audit[2049]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:01.444000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed6e968e0 a2=0 a3=7f31d18a8e90 items=0 ppid=2019 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:27:01.482271 kubelet[2019]: E0625 16:27:01.482193 2019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-3-54e11b9a94.novalocal?timeout=10s\": dial tcp 172.24.4.182:6443: connect: connection refused" interval="400ms" Jun 25 16:27:01.490769 kubelet[2019]: I0625 16:27:01.490707 2019 policy_none.go:49] "None policy: Start" Jun 25 16:27:01.492963 kubelet[2019]: I0625 16:27:01.492926 2019 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:27:01.493271 kubelet[2019]: I0625 16:27:01.493238 2019 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:27:01.507496 kubelet[2019]: I0625 16:27:01.507369 2019 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:27:01.510656 kubelet[2019]: I0625 16:27:01.510593 2019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:27:01.513495 kubelet[2019]: E0625 16:27:01.513450 2019 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815-2-4-3-54e11b9a94.novalocal\" not found" Jun 25 16:27:01.534864 kubelet[2019]: I0625 16:27:01.534790 2019 topology_manager.go:215] "Topology Admit Handler" podUID="d9cee256d7afdcdc36633fa79a33debb" podNamespace="kube-system" podName="kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.538767 kubelet[2019]: I0625 16:27:01.538720 2019 topology_manager.go:215] "Topology Admit Handler" podUID="df84e8824827e20f00fcfc13088b3361" podNamespace="kube-system" podName="kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.542647 kubelet[2019]: I0625 16:27:01.542589 2019 topology_manager.go:215] "Topology Admit Handler" podUID="924b59b0377ecf7d3216e9c9ee45cd4c" podNamespace="kube-system" podName="kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.577772 kubelet[2019]: I0625 16:27:01.577625 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-ca-certs\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.577772 kubelet[2019]: I0625 16:27:01.577741 2019 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-kubeconfig\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578145 kubelet[2019]: I0625 16:27:01.577812 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578145 kubelet[2019]: I0625 16:27:01.577874 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924b59b0377ecf7d3216e9c9ee45cd4c-kubeconfig\") pod \"kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"924b59b0377ecf7d3216e9c9ee45cd4c\") " pod="kube-system/kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578145 kubelet[2019]: I0625 16:27:01.577939 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-ca-certs\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578145 kubelet[2019]: I0625 16:27:01.578078 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-k8s-certs\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578432 kubelet[2019]: I0625 16:27:01.578148 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578432 kubelet[2019]: I0625 16:27:01.578206 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-flexvolume-dir\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.578432 kubelet[2019]: I0625 16:27:01.578269 2019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-k8s-certs\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.622027 kubelet[2019]: I0625 
16:27:01.621041 2019 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.622027 kubelet[2019]: E0625 16:27:01.621637 2019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.182:6443/api/v1/nodes\": dial tcp 172.24.4.182:6443: connect: connection refused" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:01.851081 containerd[1325]: time="2024-06-25T16:27:01.850491161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:d9cee256d7afdcdc36633fa79a33debb,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:01.872640 containerd[1325]: time="2024-06-25T16:27:01.872087282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:df84e8824827e20f00fcfc13088b3361,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:01.875973 containerd[1325]: time="2024-06-25T16:27:01.872133468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:924b59b0377ecf7d3216e9c9ee45cd4c,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:01.883662 kubelet[2019]: E0625 16:27:01.883599 2019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-3-54e11b9a94.novalocal?timeout=10s\": dial tcp 172.24.4.182:6443: connect: connection refused" interval="800ms" Jun 25 16:27:02.027580 kubelet[2019]: I0625 16:27:02.027159 2019 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:02.028367 kubelet[2019]: E0625 16:27:02.028285 2019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.182:6443/api/v1/nodes\": dial tcp 172.24.4.182:6443: connect: connection refused" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:02.079069 kubelet[2019]: W0625 16:27:02.078855 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.079319 kubelet[2019]: E0625 16:27:02.079094 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.290912 kubelet[2019]: W0625 16:27:02.290610 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.290912 kubelet[2019]: E0625 16:27:02.290708 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.427195 kubelet[2019]: W0625 16:27:02.426911 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.24.4.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-3-54e11b9a94.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.427969 kubelet[2019]: E0625 16:27:02.427236 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-3-54e11b9a94.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.547674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113081443.mount: Deactivated successfully. Jun 25 16:27:02.558872 containerd[1325]: time="2024-06-25T16:27:02.558767671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.562451 containerd[1325]: time="2024-06-25T16:27:02.562349120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 16:27:02.562868 containerd[1325]: time="2024-06-25T16:27:02.562797620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.565446 containerd[1325]: time="2024-06-25T16:27:02.565378926Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.567786 containerd[1325]: time="2024-06-25T16:27:02.567727172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:27:02.570089 containerd[1325]: time="2024-06-25T16:27:02.570052836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:27:02.571175 containerd[1325]: time="2024-06-25T16:27:02.571149445Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.576516 containerd[1325]: time="2024-06-25T16:27:02.576484087Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.577951 containerd[1325]: time="2024-06-25T16:27:02.577850516Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.580775 containerd[1325]: time="2024-06-25T16:27:02.580750764Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.581381 containerd[1325]: time="2024-06-25T16:27:02.581349121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.238089ms" Jun 25 16:27:02.583923 containerd[1325]: time="2024-06-25T16:27:02.583259516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.586699 containerd[1325]: time="2024-06-25T16:27:02.586624653Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.592771 containerd[1325]: time="2024-06-25T16:27:02.591843762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.180275ms" Jun 25 16:27:02.595623 containerd[1325]: time="2024-06-25T16:27:02.595580597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.288667ms" Jun 25 16:27:02.596381 containerd[1325]: time="2024-06-25T16:27:02.596351103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.597507 containerd[1325]: time="2024-06-25T16:27:02.597484581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.598485 containerd[1325]: time="2024-06-25T16:27:02.598337439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:27:02.621516 kubelet[2019]: W0625 16:27:02.621431 2019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.621516 kubelet[2019]: E0625 16:27:02.621495 2019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:02.685452 kubelet[2019]: E0625 16:27:02.685371 2019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-3-54e11b9a94.novalocal?timeout=10s\": dial tcp 172.24.4.182:6443: connect: connection refused" interval="1.6s" Jun 25 16:27:02.835691 kubelet[2019]: I0625 
16:27:02.832146 2019 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:02.835691 kubelet[2019]: E0625 16:27:02.832764 2019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.182:6443/api/v1/nodes\": dial tcp 172.24.4.182:6443: connect: connection refused" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:02.848040 containerd[1325]: time="2024-06-25T16:27:02.847819307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:02.848842 containerd[1325]: time="2024-06-25T16:27:02.847943928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.848842 containerd[1325]: time="2024-06-25T16:27:02.848491200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:02.848842 containerd[1325]: time="2024-06-25T16:27:02.848531525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.855379 containerd[1325]: time="2024-06-25T16:27:02.855210555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:02.856067 containerd[1325]: time="2024-06-25T16:27:02.855393594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.856067 containerd[1325]: time="2024-06-25T16:27:02.855434400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:02.856067 containerd[1325]: time="2024-06-25T16:27:02.855505331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.864070 containerd[1325]: time="2024-06-25T16:27:02.863792046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:02.864517 containerd[1325]: time="2024-06-25T16:27:02.864426830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.864845 containerd[1325]: time="2024-06-25T16:27:02.864722338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:02.865185 containerd[1325]: time="2024-06-25T16:27:02.865120575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:02.953975 containerd[1325]: time="2024-06-25T16:27:02.952538719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:d9cee256d7afdcdc36633fa79a33debb,Namespace:kube-system,Attempt:0,} returns sandbox id \"609ce70be4d381825aa795193687243858781326335e9932816987e5a8095320\"" Jun 25 16:27:02.953975 containerd[1325]: time="2024-06-25T16:27:02.953851318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:924b59b0377ecf7d3216e9c9ee45cd4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f742df08b438746499736dc6474046a9b828a8197d1c3ea6bdf892f030b3ace7\"" Jun 25 16:27:02.960475 containerd[1325]: time="2024-06-25T16:27:02.960400719Z" level=info msg="CreateContainer within sandbox \"f742df08b438746499736dc6474046a9b828a8197d1c3ea6bdf892f030b3ace7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:27:02.961012 containerd[1325]: time="2024-06-25T16:27:02.960964893Z" level=info msg="CreateContainer within sandbox \"609ce70be4d381825aa795193687243858781326335e9932816987e5a8095320\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:27:02.979650 containerd[1325]: time="2024-06-25T16:27:02.979602933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal,Uid:df84e8824827e20f00fcfc13088b3361,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f90f39ea3707129fdefbe59a3ca6e548f9abf1c25948f861792271309e3cc28\"" Jun 25 16:27:02.984003 containerd[1325]: time="2024-06-25T16:27:02.983928308Z" level=info msg="CreateContainer within sandbox \"3f90f39ea3707129fdefbe59a3ca6e548f9abf1c25948f861792271309e3cc28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:27:03.001952 containerd[1325]: time="2024-06-25T16:27:03.001857850Z" level=info msg="CreateContainer within sandbox \"609ce70be4d381825aa795193687243858781326335e9932816987e5a8095320\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cd2cd8ddee02899eb0d8a5ff68554705fb2fbea7e11ab1d03aecf72b367b2460\"" Jun 25 16:27:03.003184 containerd[1325]: time="2024-06-25T16:27:03.003144843Z" level=info msg="StartContainer for \"cd2cd8ddee02899eb0d8a5ff68554705fb2fbea7e11ab1d03aecf72b367b2460\"" Jun 25 16:27:03.010270 containerd[1325]: time="2024-06-25T16:27:03.010218726Z" level=info msg="CreateContainer within sandbox \"f742df08b438746499736dc6474046a9b828a8197d1c3ea6bdf892f030b3ace7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f30f1489a089b0f00fe5f454be4013b37b6f7a9aa3281a717a07928ce2c8332c\"" Jun 25 16:27:03.011126 containerd[1325]: time="2024-06-25T16:27:03.011078719Z" level=info msg="StartContainer for \"f30f1489a089b0f00fe5f454be4013b37b6f7a9aa3281a717a07928ce2c8332c\"" Jun 25 16:27:03.019687 containerd[1325]: time="2024-06-25T16:27:03.019642610Z" level=info msg="CreateContainer within sandbox \"3f90f39ea3707129fdefbe59a3ca6e548f9abf1c25948f861792271309e3cc28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"049c4ffed2c5b7d92aa4406269b5f8cc2f1be0a9e4268d0772277eb2ee815e6b\"" Jun 25 16:27:03.021317 containerd[1325]: time="2024-06-25T16:27:03.021271517Z" level=info msg="StartContainer for \"049c4ffed2c5b7d92aa4406269b5f8cc2f1be0a9e4268d0772277eb2ee815e6b\"" Jun 25 16:27:03.138730 containerd[1325]: 
time="2024-06-25T16:27:03.138679013Z" level=info msg="StartContainer for \"f30f1489a089b0f00fe5f454be4013b37b6f7a9aa3281a717a07928ce2c8332c\" returns successfully" Jun 25 16:27:03.139821 containerd[1325]: time="2024-06-25T16:27:03.138683261Z" level=info msg="StartContainer for \"049c4ffed2c5b7d92aa4406269b5f8cc2f1be0a9e4268d0772277eb2ee815e6b\" returns successfully" Jun 25 16:27:03.152706 containerd[1325]: time="2024-06-25T16:27:03.151393291Z" level=info msg="StartContainer for \"cd2cd8ddee02899eb0d8a5ff68554705fb2fbea7e11ab1d03aecf72b367b2460\" returns successfully" Jun 25 16:27:03.235809 kubelet[2019]: E0625 16:27:03.235738 2019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.182:6443: connect: connection refused Jun 25 16:27:04.435624 kubelet[2019]: I0625 16:27:04.435588 2019 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:06.528893 kubelet[2019]: E0625 16:27:06.528757 2019 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815-2-4-3-54e11b9a94.novalocal.17dc4c176cce6ccc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815-2-4-3-54e11b9a94.novalocal", UID:"ci-3815-2-4-3-54e11b9a94.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815-2-4-3-54e11b9a94.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 242113228, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 242113228, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815-2-4-3-54e11b9a94.novalocal"}': 'namespaces "default" not found' (will not retry!) 
Jun 25 16:27:06.535096 kubelet[2019]: I0625 16:27:06.535065 2019 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:06.816521 kubelet[2019]: E0625 16:27:06.816226 2019 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815-2-4-3-54e11b9a94.novalocal.17dc4c176f2606b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815-2-4-3-54e11b9a94.novalocal", UID:"ci-3815-2-4-3-54e11b9a94.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815-2-4-3-54e11b9a94.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 281408693, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 27, 1, 281408693, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815-2-4-3-54e11b9a94.novalocal"}': 'namespaces "default" not found' (will not retry!) Jun 25 16:27:06.817268 kubelet[2019]: E0625 16:27:06.816534 2019 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jun 25 16:27:07.202638 kubelet[2019]: I0625 16:27:07.202570 2019 apiserver.go:52] "Watching apiserver" Jun 25 16:27:07.273432 kubelet[2019]: I0625 16:27:07.273376 2019 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:27:07.495808 kubelet[2019]: W0625 16:27:07.495608 2019 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:27:07.619326 kubelet[2019]: W0625 16:27:07.619276 2019 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:27:10.421779 systemd[1]: Reloading. Jun 25 16:27:10.664408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:27:10.782213 kubelet[2019]: I0625 16:27:10.781845 2019 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:27:10.782177 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:10.800069 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:27:10.800541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:27:10.807150 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:27:10.807281 kernel: audit: type=1131 audit(1719332830.799:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:10.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:10.810939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:11.423952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:11.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:11.438097 kernel: audit: type=1130 audit(1719332831.427:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:11.545693 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:27:11.545693 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:27:11.545693 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:27:11.546411 kubelet[2373]: I0625 16:27:11.545727 2373 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:27:11.552288 kubelet[2373]: I0625 16:27:11.552246 2373 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:27:11.552490 kubelet[2373]: I0625 16:27:11.552474 2373 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:27:11.552957 kubelet[2373]: I0625 16:27:11.552939 2373 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:27:11.555898 kubelet[2373]: I0625 16:27:11.555861 2373 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:27:11.559244 kubelet[2373]: I0625 16:27:11.559218 2373 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.584761 2373 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.585206 2373 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.585430 2373 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.585451 2373 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.585464 2373 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:27:11.585966 kubelet[2373]: I0625 16:27:11.585509 2373 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:27:11.586358 kubelet[2373]: I0625 16:27:11.585611 2373 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:27:11.586358 kubelet[2373]: I0625 16:27:11.585627 2373 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:27:11.595200 kubelet[2373]: I0625 16:27:11.595024 2373 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:27:11.595200 kubelet[2373]: I0625 16:27:11.595058 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:27:11.597277 kubelet[2373]: I0625 16:27:11.596862 2373 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:27:11.598604 kubelet[2373]: I0625 16:27:11.598576 2373 server.go:1232] "Started kubelet" Jun 25 16:27:11.610452 kubelet[2373]: I0625 16:27:11.610426 2373 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:27:11.612573 kubelet[2373]: I0625 16:27:11.612538 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:27:11.613623 kubelet[2373]: I0625 16:27:11.613608 2373 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:27:11.615786 kubelet[2373]: I0625 16:27:11.615768 2373 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:27:11.616725 kubelet[2373]: I0625 16:27:11.616710 2373 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:27:11.622600 kubelet[2373]: I0625 16:27:11.622585 2373 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:27:11.622832 kubelet[2373]: I0625 16:27:11.622818 2373 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:27:11.623091 kubelet[2373]: I0625 16:27:11.623078 2373 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:27:11.633565 kubelet[2373]: E0625 16:27:11.633524 2373 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:27:11.633738 kubelet[2373]: E0625 16:27:11.633727 2373 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:27:11.716661 kubelet[2373]: I0625 16:27:11.714025 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:27:11.719648 kubelet[2373]: I0625 16:27:11.719628 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:27:11.719796 kubelet[2373]: I0625 16:27:11.719785 2373 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:27:11.719885 kubelet[2373]: I0625 16:27:11.719875 2373 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:27:11.720009 kubelet[2373]: E0625 16:27:11.719982 2373 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:27:11.733513 kubelet[2373]: I0625 16:27:11.733479 2373 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:11.744078 kubelet[2373]: I0625 16:27:11.744050 2373 kubelet_node_status.go:108] "Node was previously registered" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:11.744820 kubelet[2373]: I0625 16:27:11.744783 2373 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:11.821138 kubelet[2373]: E0625 16:27:11.821110 2373 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:27:11.829289 kubelet[2373]: I0625 16:27:11.829263 2373 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:27:11.829455 kubelet[2373]: I0625 16:27:11.829444 2373 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:27:11.829550 kubelet[2373]: I0625 16:27:11.829531 2373 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:27:11.829759 kubelet[2373]: I0625 16:27:11.829746 2373 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:27:11.829851 kubelet[2373]: I0625 16:27:11.829839 2373 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:27:11.829917 kubelet[2373]: I0625 16:27:11.829908 2373 policy_none.go:49] "None policy: Start" Jun 25 16:27:11.830696 kubelet[2373]: I0625 16:27:11.830682 2373 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:27:11.830812 kubelet[2373]: I0625 16:27:11.830801 2373 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:27:11.831153 kubelet[2373]: I0625 16:27:11.831140 2373 state_mem.go:75] "Updated machine memory state" Jun 25 16:27:11.832333 kubelet[2373]: I0625 16:27:11.832319 2373 manager.go:471] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:27:11.835535 kubelet[2373]: I0625 16:27:11.835102 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:27:12.022829 kubelet[2373]: I0625 16:27:12.022357 2373 topology_manager.go:215] "Topology Admit Handler" podUID="d9cee256d7afdcdc36633fa79a33debb" podNamespace="kube-system" podName="kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.022829 kubelet[2373]: I0625 16:27:12.022483 2373 topology_manager.go:215] "Topology Admit Handler" podUID="df84e8824827e20f00fcfc13088b3361" podNamespace="kube-system" podName="kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.022829 kubelet[2373]: I0625 16:27:12.022521 2373 topology_manager.go:215] "Topology Admit Handler" podUID="924b59b0377ecf7d3216e9c9ee45cd4c" podNamespace="kube-system" podName="kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030021 kubelet[2373]: I0625 16:27:12.029958 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030228 kubelet[2373]: I0625 16:27:12.030067 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-ca-certs\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030228 kubelet[2373]: I0625 16:27:12.030124 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-ca-certs\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030228 kubelet[2373]: I0625 16:27:12.030170 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-k8s-certs\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030228 kubelet[2373]: I0625 16:27:12.030218 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-kubeconfig\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030376 kubelet[2373]: I0625 16:27:12.030262 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-k8s-certs\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: 
\"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030376 kubelet[2373]: I0625 16:27:12.030312 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9cee256d7afdcdc36633fa79a33debb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"d9cee256d7afdcdc36633fa79a33debb\") " pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030376 kubelet[2373]: I0625 16:27:12.030360 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df84e8824827e20f00fcfc13088b3361-flexvolume-dir\") pod \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"df84e8824827e20f00fcfc13088b3361\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.030478 kubelet[2373]: I0625 16:27:12.030408 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924b59b0377ecf7d3216e9c9ee45cd4c-kubeconfig\") pod \"kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal\" (UID: \"924b59b0377ecf7d3216e9c9ee45cd4c\") " pod="kube-system/kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.033365 kubelet[2373]: W0625 16:27:12.033318 2373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:27:12.033625 kubelet[2373]: E0625 16:27:12.033610 2373 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.034126 kubelet[2373]: W0625 16:27:12.034114 2373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:27:12.046245 kubelet[2373]: W0625 16:27:12.046224 2373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:27:12.046596 kubelet[2373]: E0625 16:27:12.046580 2373 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:27:12.596307 kubelet[2373]: I0625 16:27:12.596245 2373 apiserver.go:52] "Watching apiserver" Jun 25 16:27:12.623058 kubelet[2373]: I0625 16:27:12.623001 2373 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:27:12.861450 kubelet[2373]: I0625 16:27:12.861286 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815-2-4-3-54e11b9a94.novalocal" podStartSLOduration=0.86119155 podCreationTimestamp="2024-06-25 16:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:12.830186585 +0000 UTC m=+1.373220690" watchObservedRunningTime="2024-06-25 16:27:12.86119155 +0000 UTC m=+1.404225705" Jun 25 16:27:12.903703 kubelet[2373]: I0625 16:27:12.903654 
2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815-2-4-3-54e11b9a94.novalocal" podStartSLOduration=5.903604161 podCreationTimestamp="2024-06-25 16:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:12.862345179 +0000 UTC m=+1.405379304" watchObservedRunningTime="2024-06-25 16:27:12.903604161 +0000 UTC m=+1.446638266" Jun 25 16:27:17.502568 sudo[1523]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:17.501000 audit[1523]: USER_END pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:17.510073 kernel: audit: type=1106 audit(1719332837.501:221): pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:17.502000 audit[1523]: CRED_DISP pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:17.519022 kernel: audit: type=1104 audit(1719332837.502:222): pid=1523 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:17.697739 sshd[1517]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:17.698000 audit[1517]: USER_END pid=1517 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:27:17.705079 kernel: audit: type=1106 audit(1719332837.698:223): pid=1517 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:27:17.705057 systemd[1]: sshd@6-172.24.4.182:22-172.24.4.1:58470.service: Deactivated successfully. Jun 25 16:27:17.698000 audit[1517]: CRED_DISP pid=1517 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:27:17.707190 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:27:17.708565 systemd-logind[1302]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:27:17.710124 kernel: audit: type=1104 audit(1719332837.698:224): pid=1517 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:27:17.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.182:22-172.24.4.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:27:17.715050 kernel: audit: type=1131 audit(1719332837.704:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.182:22-172.24.4.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:17.713139 systemd-logind[1302]: Removed session 7. Jun 25 16:27:19.119302 kubelet[2373]: I0625 16:27:19.119178 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815-2-4-3-54e11b9a94.novalocal" podStartSLOduration=12.119063354 podCreationTimestamp="2024-06-25 16:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:12.904717694 +0000 UTC m=+1.447751809" watchObservedRunningTime="2024-06-25 16:27:19.119063354 +0000 UTC m=+7.662097560" Jun 25 16:27:22.781014 kubelet[2373]: I0625 16:27:22.780969 2373 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:27:22.782044 containerd[1325]: time="2024-06-25T16:27:22.781855961Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:27:22.782809 kubelet[2373]: I0625 16:27:22.782758 2373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:27:23.602164 kubelet[2373]: I0625 16:27:23.602114 2373 topology_manager.go:215] "Topology Admit Handler" podUID="e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f" podNamespace="kube-system" podName="kube-proxy-j8wv9" Jun 25 16:27:23.628242 kubelet[2373]: I0625 16:27:23.628208 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f-lib-modules\") pod \"kube-proxy-j8wv9\" (UID: \"e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f\") " pod="kube-system/kube-proxy-j8wv9" Jun 25 16:27:23.628473 kubelet[2373]: I0625 16:27:23.628460 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f-xtables-lock\") pod \"kube-proxy-j8wv9\" (UID: \"e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f\") " pod="kube-system/kube-proxy-j8wv9" Jun 25 16:27:23.628606 kubelet[2373]: I0625 16:27:23.628594 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcgx\" (UniqueName: \"kubernetes.io/projected/e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f-kube-api-access-nhcgx\") pod \"kube-proxy-j8wv9\" (UID: \"e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f\") " pod="kube-system/kube-proxy-j8wv9" Jun 25 16:27:23.628702 kubelet[2373]: I0625 16:27:23.628691 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f-kube-proxy\") pod \"kube-proxy-j8wv9\" (UID: \"e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f\") " pod="kube-system/kube-proxy-j8wv9" Jun 25 16:27:23.714073 kubelet[2373]: I0625 16:27:23.714032 2373 topology_manager.go:215] "Topology Admit Handler" podUID="61936ac3-6de3-4d07-b516-1c11dc3ee5a5" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-299ks" Jun 25 16:27:23.730845 kubelet[2373]: I0625 16:27:23.730819 2373 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2st4\" (UniqueName: \"kubernetes.io/projected/61936ac3-6de3-4d07-b516-1c11dc3ee5a5-kube-api-access-h2st4\") pod \"tigera-operator-76c4974c85-299ks\" (UID: \"61936ac3-6de3-4d07-b516-1c11dc3ee5a5\") " pod="tigera-operator/tigera-operator-76c4974c85-299ks" Jun 25 16:27:23.731089 kubelet[2373]: I0625 16:27:23.731076 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61936ac3-6de3-4d07-b516-1c11dc3ee5a5-var-lib-calico\") pod \"tigera-operator-76c4974c85-299ks\" (UID: \"61936ac3-6de3-4d07-b516-1c11dc3ee5a5\") " pod="tigera-operator/tigera-operator-76c4974c85-299ks" Jun 25 16:27:23.924303 containerd[1325]: time="2024-06-25T16:27:23.924170760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8wv9,Uid:e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:23.982277 containerd[1325]: time="2024-06-25T16:27:23.981853594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:23.982754 containerd[1325]: time="2024-06-25T16:27:23.982561647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:23.983608 containerd[1325]: time="2024-06-25T16:27:23.982951074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:23.984062 containerd[1325]: time="2024-06-25T16:27:23.983898085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:24.018053 containerd[1325]: time="2024-06-25T16:27:24.017902578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-299ks,Uid:61936ac3-6de3-4d07-b516-1c11dc3ee5a5,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:27:24.075135 containerd[1325]: time="2024-06-25T16:27:24.075081263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8wv9,Uid:e2807cfe-a34e-4b4a-a18a-40c6b1fa0d5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1052c630d21cedfc7d43af91be9c3ffb8c5173d7339c3c32192e600ff0442cbd\"" Jun 25 16:27:24.075708 containerd[1325]: time="2024-06-25T16:27:24.075287478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:24.075708 containerd[1325]: time="2024-06-25T16:27:24.075367698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:24.075708 containerd[1325]: time="2024-06-25T16:27:24.075397744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:24.075708 containerd[1325]: time="2024-06-25T16:27:24.075530172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:24.078422 containerd[1325]: time="2024-06-25T16:27:24.078394046Z" level=info msg="CreateContainer within sandbox \"1052c630d21cedfc7d43af91be9c3ffb8c5173d7339c3c32192e600ff0442cbd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:27:24.103741 containerd[1325]: time="2024-06-25T16:27:24.102831632Z" level=info msg="CreateContainer within sandbox \"1052c630d21cedfc7d43af91be9c3ffb8c5173d7339c3c32192e600ff0442cbd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a90e04a3afdc50f4f200d853cee2e21b94eaa55ecb5c4a65a08a14fc4a3f735b\"" Jun 25 16:27:24.106481 containerd[1325]: time="2024-06-25T16:27:24.106452791Z" level=info msg="StartContainer for \"a90e04a3afdc50f4f200d853cee2e21b94eaa55ecb5c4a65a08a14fc4a3f735b\"" Jun 25 16:27:24.148803 containerd[1325]: time="2024-06-25T16:27:24.148762328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-299ks,Uid:61936ac3-6de3-4d07-b516-1c11dc3ee5a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f0a8a4992fd8f54f0c14d5f53a675a8b17afc6060f800577c90bd379c03b7f08\"" Jun 25 16:27:24.158034 containerd[1325]: time="2024-06-25T16:27:24.153590094Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:27:24.187482 containerd[1325]: time="2024-06-25T16:27:24.187357090Z" level=info msg="StartContainer for \"a90e04a3afdc50f4f200d853cee2e21b94eaa55ecb5c4a65a08a14fc4a3f735b\" returns successfully" Jun 25 16:27:24.589000 audit[2587]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.598050 kernel: audit: type=1325 audit(1719332844.589:226): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.598260 kernel: audit: type=1300 audit(1719332844.589:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d0853a0 a2=0 a3=7ffe1d08538c items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.589000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d0853a0 a2=0 a3=7ffe1d08538c items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:24.619123 kernel: audit: type=1327 audit(1719332844.589:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:24.602000 audit[2588]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.632056 kernel: audit: type=1325 audit(1719332844.602:227): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.602000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedf6af8d0 a2=0 a3=7ffedf6af8bc items=0 ppid=2548 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.642022 kernel: audit: type=1300 audit(1719332844.602:227): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedf6af8d0 a2=0 a3=7ffedf6af8bc items=0 ppid=2548 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:27:24.646022 kernel: audit: type=1327 audit(1719332844.602:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:27:24.607000 audit[2589]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.651030 kernel: audit: type=1325 audit(1719332844.607:228): table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.607000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec32722d0 a2=0 a3=7ffec32722bc items=0 ppid=2548 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.657217 kernel: audit: type=1300 audit(1719332844.607:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec32722d0 a2=0 a3=7ffec32722bc items=0 ppid=2548 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:24.661032 kernel: audit: type=1327 audit(1719332844.607:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:24.612000 audit[2590]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.665022 kernel: audit: type=1325 audit(1719332844.612:229): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.612000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe449a2e40 a2=0 a3=7ffe449a2e2c items=0 ppid=2548 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:27:24.625000 audit[2591]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.625000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff39654320 a2=0 a3=7fff3965430c items=0 ppid=2548 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:27:24.637000 audit[2592]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.637000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc5abcc70 a2=0 a3=7ffdc5abcc5c items=0 ppid=2548 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:27:24.697000 audit[2593]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.697000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff844bd1d0 a2=0 a3=7fff844bd1bc items=0 ppid=2548 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:27:24.702000 audit[2595]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.702000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdbfe00020 a2=0 a3=7ffdbfe0000c items=0 ppid=2548 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:27:24.706000 audit[2598]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.706000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd92c93340 a2=0 a3=7ffd92c9332c items=0 ppid=2548 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:27:24.707000 audit[2599]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.707000 audit[2599]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff661b62f0 a2=0 a3=7fff661b62dc items=0 ppid=2548 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:27:24.712000 audit[2601]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.712000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb61d1200 a2=0 a3=7ffcb61d11ec items=0 ppid=2548 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:27:24.714000 audit[2602]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.714000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca32ff180 a2=0 a3=7ffca32ff16c items=0 ppid=2548 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:27:24.717000 audit[2604]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.717000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd4fbed50 a2=0 a3=7fffd4fbed3c items=0 ppid=2548 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:27:24.721000 audit[2607]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.721000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe35714ff0 a2=0 a3=7ffe35714fdc items=0 ppid=2548 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.721000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:27:24.722000 audit[2608]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.722000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6efaffe0 a2=0 a3=7ffe6efaffcc items=0 ppid=2548 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:27:24.725000 audit[2610]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.725000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe69aa0a20 a2=0 a3=7ffe69aa0a0c items=0 ppid=2548 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:27:24.726000 audit[2611]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.726000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9e572b90 a2=0 a3=7ffc9e572b7c items=0 ppid=2548 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:27:24.729000 audit[2613]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.729000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9f1f6600 a2=0 a3=7fff9f1f65ec items=0 ppid=2548 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:24.734000 audit[2616]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.734000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9db87ae0 a2=0 a3=7ffe9db87acc items=0 ppid=2548 
pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:24.739000 audit[2619]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.739000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1afa69e0 a2=0 a3=7ffe1afa69cc items=0 ppid=2548 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:27:24.741000 audit[2620]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.741000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff431c8650 a2=0 a3=7fff431c863c items=0 ppid=2548 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:27:24.745000 audit[2622]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.745000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc70427f30 a2=0 a3=7ffc70427f1c items=0 ppid=2548 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:24.749000 audit[2625]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.749000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd232180c0 a2=0 a3=7ffd232180ac items=0 ppid=2548 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.749000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:24.750000 audit[2626]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.750000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc99a142c0 a2=0 a3=7ffc99a142ac items=0 ppid=2548 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:27:24.753000 audit[2628]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:24.753000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffb6dd9fe0 a2=0 a3=7fffb6dd9fcc items=0 ppid=2548 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:27:24.760878 systemd[1]: run-containerd-runc-k8s.io-1052c630d21cedfc7d43af91be9c3ffb8c5173d7339c3c32192e600ff0442cbd-runc.hAOOvr.mount: Deactivated successfully. 
Jun 25 16:27:24.786000 audit[2634]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:24.786000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffda20d5b70 a2=0 a3=7ffda20d5b5c items=0 ppid=2548 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:24.796000 audit[2634]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:24.796000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffda20d5b70 a2=0 a3=7ffda20d5b5c items=0 ppid=2548 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:24.799000 audit[2640]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.799000 audit[2640]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd50718070 a2=0 a3=7ffd5071805c items=0 ppid=2548 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:27:24.802000 audit[2642]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.802000 audit[2642]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdbcc2d590 a2=0 a3=7ffdbcc2d57c items=0 ppid=2548 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:27:24.807000 audit[2645]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.807000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8a801930 a2=0 a3=7fff8a80191c items=0 ppid=2548 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.807000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:27:24.808000 audit[2646]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.808000 audit[2646]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc09781b50 a2=0 a3=7ffc09781b3c items=0 ppid=2548 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:27:24.811000 audit[2648]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.811000 audit[2648]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8802c6d0 a2=0 a3=7fff8802c6bc items=0 ppid=2548 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:27:24.815000 audit[2649]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.815000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd2d062b0 a2=0 a3=7ffcd2d0629c items=0 ppid=2548 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:27:24.821000 audit[2651]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.821000 audit[2651]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd56303340 a2=0 a3=7ffd5630332c items=0 ppid=2548 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:27:24.834257 kubelet[2373]: I0625 16:27:24.834203 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j8wv9" podStartSLOduration=1.8341249689999999 podCreationTimestamp="2024-06-25 16:27:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:24.833772469 +0000 UTC m=+13.376806605" watchObservedRunningTime="2024-06-25 16:27:24.834124969 +0000 UTC m=+13.377159084" Jun 25 16:27:24.842000 audit[2654]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.842000 audit[2654]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffd3dc8b10 a2=0 a3=7fffd3dc8afc items=0 ppid=2548 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:27:24.843000 audit[2655]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.843000 audit[2655]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2a5048f0 a2=0 a3=7ffc2a5048dc items=0 ppid=2548 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:27:24.850000 audit[2657]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.850000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1d2f2100 a2=0 a3=7fff1d2f20ec items=0 ppid=2548 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:27:24.855000 audit[2658]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.855000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9a4e9180 a2=0 a3=7ffc9a4e916c items=0 ppid=2548 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:27:24.858000 audit[2660]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2660 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.858000 audit[2660]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc1101cbc0 a2=0 a3=7ffc1101cbac items=0 ppid=2548 pid=2660 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:24.862000 audit[2663]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.862000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9e146800 a2=0 a3=7ffc9e1467ec items=0 ppid=2548 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:27:24.867000 audit[2666]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.867000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe23597f60 a2=0 a3=7ffe23597f4c items=0 ppid=2548 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:27:24.868000 audit[2667]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.868000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc57eeb710 a2=0 a3=7ffc57eeb6fc items=0 ppid=2548 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:27:24.871000 audit[2669]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.871000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd693fef20 a2=0 a3=7ffd693fef0c items=0 ppid=2548 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.871000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:24.875000 audit[2672]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.875000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc57c05620 a2=0 a3=7ffc57c0560c items=0 ppid=2548 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:24.876000 audit[2673]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.876000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb928a2a0 a2=0 a3=7ffeb928a28c items=0 ppid=2548 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:27:24.879000 audit[2675]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2675 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.879000 audit[2675]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcbc335110 a2=0 a3=7ffcbc3350fc items=0 ppid=2548 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:27:24.880000 audit[2676]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.880000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda1cb3e90 a2=0 a3=7ffda1cb3e7c items=0 ppid=2548 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:27:24.883000 audit[2678]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2678 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.883000 audit[2678]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc97052150 a2=0 a3=7ffc9705213c items=0 ppid=2548 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:24.887000 audit[2681]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:24.887000 audit[2681]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebba9bce0 a2=0 a3=7ffebba9bccc items=0 ppid=2548 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:24.891000 audit[2683]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:27:24.891000 audit[2683]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdb93ee6b0 a2=0 a3=7ffdb93ee69c items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.891000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:24.891000 audit[2683]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:27:24.891000 audit[2683]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdb93ee6b0 a2=0 a3=7ffdb93ee69c items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.891000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:25.822156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398513117.mount: Deactivated successfully. 
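
The ip6tables and ip6tables-restore activity above is captured by the audit subsystem with the command line hex-encoded in the PROCTITLE field: the process argv elements are NUL-separated, which forces auditd to hex-encode the whole value. A minimal decoding sketch for reading these records (Python assumed as the reader's tool, not part of the log); the sample string is the PROCTITLE from the KUBE-FIREWALL chain registration above:

```python
# Reader aid (not part of the log): decode an audit PROCTITLE field.
# The value is the process argv joined with NUL bytes and hex-encoded by auditd.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

# PROCTITLE from the nft_register_chain record for KUBE-FIREWALL above:
print(decode_proctitle(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4649524557414C4C002D740066696C746572"
))
# -> ip6tables -w 5 -W 100000 -N KUBE-FIREWALL -t filter
```
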
Jun 25 16:27:26.637634 containerd[1325]: time="2024-06-25T16:27:26.637542948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.639121 containerd[1325]: time="2024-06-25T16:27:26.639048985Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052" Jun 25 16:27:26.640382 containerd[1325]: time="2024-06-25T16:27:26.640343295Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.643119 containerd[1325]: time="2024-06-25T16:27:26.643085093Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.645307 containerd[1325]: time="2024-06-25T16:27:26.645283886Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.647614 containerd[1325]: time="2024-06-25T16:27:26.647586661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.493929823s" Jun 25 16:27:26.647731 containerd[1325]: time="2024-06-25T16:27:26.647711305Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:27:26.652385 containerd[1325]: time="2024-06-25T16:27:26.652327398Z" level=info msg="CreateContainer within sandbox \"f0a8a4992fd8f54f0c14d5f53a675a8b17afc6060f800577c90bd379c03b7f08\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:27:26.668774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556568001.mount: Deactivated successfully. 
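
The kubelet's "Observed pod startup duration" entries (kube-proxy above, tigera-operator just below) report a podStartSLOduration that is consistent with the watch-observed running time minus the pod creation time, minus the image-pull window. A small sketch reproducing the tigera-operator figure from the timestamps in the entry that follows; this is an inference from the logged values, not quoted kubelet code, and the creation timestamp's sub-second part is assumed to be exactly .000 since the log prints it to whole seconds. The kube-proxy entry above, which involved no image pull, reduces to watchObservedRunningTime minus creation (1.834124969) under the same reading.

```python
# Reader aid (not from the log): reconstruct podStartSLOduration for
# tigera-operator-76c4974c85-299ks from the seconds-within-minute values
# in the kubelet entry below. Assumption: creation was exactly :23.000.
created = 23.0
first_pull, last_pull = 24.152812770, 26.648147681  # image pull window
watch_running = 26.845083862                        # watchObservedRunningTime

slo = (watch_running - created) - (last_pull - first_pull)
print(round(slo, 9))  # 1.349748951, matching podStartSLOduration in the entry below
```
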
Jun 25 16:27:26.681395 containerd[1325]: time="2024-06-25T16:27:26.681324288Z" level=info msg="CreateContainer within sandbox \"f0a8a4992fd8f54f0c14d5f53a675a8b17afc6060f800577c90bd379c03b7f08\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"508942bb9d66208db546b57aa4de06ff8ce97612f2ff0b0a4c20352d1c9829e3\"" Jun 25 16:27:26.683789 containerd[1325]: time="2024-06-25T16:27:26.682511248Z" level=info msg="StartContainer for \"508942bb9d66208db546b57aa4de06ff8ce97612f2ff0b0a4c20352d1c9829e3\"" Jun 25 16:27:26.756950 containerd[1325]: time="2024-06-25T16:27:26.756890132Z" level=info msg="StartContainer for \"508942bb9d66208db546b57aa4de06ff8ce97612f2ff0b0a4c20352d1c9829e3\" returns successfully" Jun 25 16:27:26.845313 kubelet[2373]: I0625 16:27:26.845136 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-299ks" podStartSLOduration=1.349748951 podCreationTimestamp="2024-06-25 16:27:23 +0000 UTC" firstStartedPulling="2024-06-25 16:27:24.15281277 +0000 UTC m=+12.695846875" lastFinishedPulling="2024-06-25 16:27:26.648147681 +0000 UTC m=+15.191181786" observedRunningTime="2024-06-25 16:27:26.844978566 +0000 UTC m=+15.388012681" watchObservedRunningTime="2024-06-25 16:27:26.845083862 +0000 UTC m=+15.388117977" Jun 25 16:27:29.878000 audit[2732]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.882602 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 16:27:29.882729 kernel: audit: type=1325 audit(1719332849.878:277): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.878000 audit[2732]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff42067b70 a2=0 a3=7fff42067b5c items=0 ppid=2548 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.892030 kernel: audit: type=1300 audit(1719332849.878:277): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff42067b70 a2=0 a3=7fff42067b5c items=0 ppid=2548 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.898046 kernel: audit: type=1327 audit(1719332849.878:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.894000 audit[2732]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.902173 kernel: audit: type=1325 audit(1719332849.894:278): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.894000 audit[2732]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff42067b70 a2=0 a3=0 items=0 ppid=2548 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.908114 kernel: audit: type=1300 audit(1719332849.894:278): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff42067b70 a2=0 a3=0 items=0 ppid=2548 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.911017 kernel: audit: type=1327 audit(1719332849.894:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.914000 audit[2734]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.918014 kernel: audit: type=1325 audit(1719332849.914:279): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.914000 audit[2734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffda29a2080 a2=0 a3=7ffda29a206c items=0 ppid=2548 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.926443 kernel: audit: type=1300 audit(1719332849.914:279): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffda29a2080 a2=0 a3=7ffda29a206c items=0 ppid=2548 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.926509 kernel: audit: type=1327 audit(1719332849.914:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:29.915000 audit[2734]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.933012 kernel: audit: type=1325 audit(1719332849.915:280): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:29.915000 audit[2734]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda29a2080 a2=0 a3=0 items=0 ppid=2548 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:30.029546 kubelet[2373]: I0625 16:27:30.029500 2373 topology_manager.go:215] "Topology Admit Handler" podUID="e9c1a55c-8a5d-4f77-9887-d30a381401e1" podNamespace="calico-system" podName="calico-typha-975958495-7bbh9" Jun 25 16:27:30.138234 kubelet[2373]: I0625 16:27:30.138119 2373 topology_manager.go:215] "Topology Admit Handler" podUID="b32e6b0d-6943-41a4-a5c4-da0a417e0756" podNamespace="calico-system" 
podName="calico-node-mf4w5" Jun 25 16:27:30.171501 kubelet[2373]: I0625 16:27:30.171459 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e9c1a55c-8a5d-4f77-9887-d30a381401e1-typha-certs\") pod \"calico-typha-975958495-7bbh9\" (UID: \"e9c1a55c-8a5d-4f77-9887-d30a381401e1\") " pod="calico-system/calico-typha-975958495-7bbh9" Jun 25 16:27:30.171710 kubelet[2373]: I0625 16:27:30.171515 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2vr\" (UniqueName: \"kubernetes.io/projected/e9c1a55c-8a5d-4f77-9887-d30a381401e1-kube-api-access-cc2vr\") pod \"calico-typha-975958495-7bbh9\" (UID: \"e9c1a55c-8a5d-4f77-9887-d30a381401e1\") " pod="calico-system/calico-typha-975958495-7bbh9" Jun 25 16:27:30.171710 kubelet[2373]: I0625 16:27:30.171547 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c1a55c-8a5d-4f77-9887-d30a381401e1-tigera-ca-bundle\") pod \"calico-typha-975958495-7bbh9\" (UID: \"e9c1a55c-8a5d-4f77-9887-d30a381401e1\") " pod="calico-system/calico-typha-975958495-7bbh9" Jun 25 16:27:30.272295 kubelet[2373]: I0625 16:27:30.272213 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b32e6b0d-6943-41a4-a5c4-da0a417e0756-node-certs\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272492 kubelet[2373]: I0625 16:27:30.272333 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-policysync\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272492 kubelet[2373]: I0625 16:27:30.272454 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtvp\" (UniqueName: \"kubernetes.io/projected/b32e6b0d-6943-41a4-a5c4-da0a417e0756-kube-api-access-cxtvp\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272586 kubelet[2373]: I0625 16:27:30.272516 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32e6b0d-6943-41a4-a5c4-da0a417e0756-tigera-ca-bundle\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272698 kubelet[2373]: I0625 16:27:30.272663 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-cni-log-dir\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272764 kubelet[2373]: I0625 16:27:30.272733 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-flexvol-driver-host\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " 
pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272799 kubelet[2373]: I0625 16:27:30.272791 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-cni-net-dir\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272864 kubelet[2373]: I0625 16:27:30.272846 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-var-run-calico\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.272937 kubelet[2373]: I0625 16:27:30.272911 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-lib-modules\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.273051 kubelet[2373]: I0625 16:27:30.273026 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-xtables-lock\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.273119 kubelet[2373]: I0625 16:27:30.273099 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-var-lib-calico\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.273192 kubelet[2373]: I0625 16:27:30.273171 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b32e6b0d-6943-41a4-a5c4-da0a417e0756-cni-bin-dir\") pod \"calico-node-mf4w5\" (UID: \"b32e6b0d-6943-41a4-a5c4-da0a417e0756\") " pod="calico-system/calico-node-mf4w5" Jun 25 16:27:30.290205 kubelet[2373]: I0625 16:27:30.290173 2373 topology_manager.go:215] "Topology Admit Handler" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" podNamespace="calico-system" podName="csi-node-driver-xtzcb" Jun 25 16:27:30.290706 kubelet[2373]: E0625 16:27:30.290687 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:30.337080 containerd[1325]: time="2024-06-25T16:27:30.336968936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-975958495-7bbh9,Uid:e9c1a55c-8a5d-4f77-9887-d30a381401e1,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:30.374048 kubelet[2373]: I0625 16:27:30.373975 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bca2029-8200-41b2-a394-21e05e7bb1ca-socket-dir\") pod \"csi-node-driver-xtzcb\" (UID: \"6bca2029-8200-41b2-a394-21e05e7bb1ca\") " 
pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:30.374048 kubelet[2373]: I0625 16:27:30.374061 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbzb\" (UniqueName: \"kubernetes.io/projected/6bca2029-8200-41b2-a394-21e05e7bb1ca-kube-api-access-zgbzb\") pod \"csi-node-driver-xtzcb\" (UID: \"6bca2029-8200-41b2-a394-21e05e7bb1ca\") " pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:30.374315 kubelet[2373]: I0625 16:27:30.374194 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bca2029-8200-41b2-a394-21e05e7bb1ca-varrun\") pod \"csi-node-driver-xtzcb\" (UID: \"6bca2029-8200-41b2-a394-21e05e7bb1ca\") " pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:30.374315 kubelet[2373]: I0625 16:27:30.374243 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bca2029-8200-41b2-a394-21e05e7bb1ca-kubelet-dir\") pod \"csi-node-driver-xtzcb\" (UID: \"6bca2029-8200-41b2-a394-21e05e7bb1ca\") " pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:30.374406 kubelet[2373]: I0625 16:27:30.374370 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bca2029-8200-41b2-a394-21e05e7bb1ca-registration-dir\") pod \"csi-node-driver-xtzcb\" (UID: \"6bca2029-8200-41b2-a394-21e05e7bb1ca\") " pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:30.387737 kubelet[2373]: E0625 16:27:30.383689 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.387737 kubelet[2373]: W0625 16:27:30.383733 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.387737 kubelet[2373]: E0625 16:27:30.383783 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.387737 kubelet[2373]: E0625 16:27:30.384435 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.387737 kubelet[2373]: W0625 16:27:30.384445 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.387737 kubelet[2373]: E0625 16:27:30.384472 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.393120 containerd[1325]: time="2024-06-25T16:27:30.392389180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:30.393120 containerd[1325]: time="2024-06-25T16:27:30.392534953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:30.393120 containerd[1325]: time="2024-06-25T16:27:30.392613861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:30.393120 containerd[1325]: time="2024-06-25T16:27:30.392741961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:30.425061 kubelet[2373]: E0625 16:27:30.423703 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.425061 kubelet[2373]: W0625 16:27:30.423725 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.425061 kubelet[2373]: E0625 16:27:30.423758 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.466211 containerd[1325]: time="2024-06-25T16:27:30.466152492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mf4w5,Uid:b32e6b0d-6943-41a4-a5c4-da0a417e0756,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:30.475106 kubelet[2373]: E0625 16:27:30.475062 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.475106 kubelet[2373]: W0625 16:27:30.475091 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.475339 kubelet[2373]: E0625 16:27:30.475122 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.475463 kubelet[2373]: E0625 16:27:30.475441 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.475463 kubelet[2373]: W0625 16:27:30.475458 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.475561 kubelet[2373]: E0625 16:27:30.475487 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.475747 kubelet[2373]: E0625 16:27:30.475726 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.475747 kubelet[2373]: W0625 16:27:30.475743 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.475818 kubelet[2373]: E0625 16:27:30.475771 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:30.476021 kubelet[2373]: E0625 16:27:30.475980 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.476021 kubelet[2373]: W0625 16:27:30.476012 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.476096 kubelet[2373]: E0625 16:27:30.476029 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.476277 kubelet[2373]: E0625 16:27:30.476257 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.476277 kubelet[2373]: W0625 16:27:30.476272 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.476350 kubelet[2373]: E0625 16:27:30.476301 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.476533 kubelet[2373]: E0625 16:27:30.476514 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.476533 kubelet[2373]: W0625 16:27:30.476530 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.476844 kubelet[2373]: E0625 16:27:30.476660 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.476844 kubelet[2373]: E0625 16:27:30.476832 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.476844 kubelet[2373]: W0625 16:27:30.476842 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.476949 kubelet[2373]: E0625 16:27:30.476918 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.481036 kubelet[2373]: E0625 16:27:30.481002 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.481036 kubelet[2373]: W0625 16:27:30.481026 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.481334 kubelet[2373]: E0625 16:27:30.481220 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:30.481386 kubelet[2373]: E0625 16:27:30.481361 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.481386 kubelet[2373]: W0625 16:27:30.481371 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.483052 kubelet[2373]: E0625 16:27:30.483027 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.484519 kubelet[2373]: E0625 16:27:30.484495 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.484519 kubelet[2373]: W0625 16:27:30.484513 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.484710 kubelet[2373]: E0625 16:27:30.484612 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.484970 kubelet[2373]: E0625 16:27:30.484949 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.484970 kubelet[2373]: W0625 16:27:30.484965 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.485151 kubelet[2373]: E0625 16:27:30.485081 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.485201 kubelet[2373]: E0625 16:27:30.485189 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.485234 kubelet[2373]: W0625 16:27:30.485226 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.485416 kubelet[2373]: E0625 16:27:30.485317 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.485499 kubelet[2373]: E0625 16:27:30.485454 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.485499 kubelet[2373]: W0625 16:27:30.485465 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.486016 kubelet[2373]: E0625 16:27:30.485566 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:30.486016 kubelet[2373]: E0625 16:27:30.485725 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.486016 kubelet[2373]: W0625 16:27:30.485734 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.486016 kubelet[2373]: E0625 16:27:30.485809 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.486016 kubelet[2373]: E0625 16:27:30.485908 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.486016 kubelet[2373]: W0625 16:27:30.485916 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.486016 kubelet[2373]: E0625 16:27:30.486011 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.486221 kubelet[2373]: E0625 16:27:30.486111 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.486221 kubelet[2373]: W0625 16:27:30.486121 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.486221 kubelet[2373]: E0625 16:27:30.486138 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.487026 kubelet[2373]: E0625 16:27:30.486380 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.487026 kubelet[2373]: W0625 16:27:30.486396 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.487026 kubelet[2373]: E0625 16:27:30.486469 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.487026 kubelet[2373]: E0625 16:27:30.486699 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.487026 kubelet[2373]: W0625 16:27:30.486710 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.487026 kubelet[2373]: E0625 16:27:30.486808 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:30.487261 kubelet[2373]: E0625 16:27:30.487032 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.487261 kubelet[2373]: W0625 16:27:30.487043 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.487706 kubelet[2373]: E0625 16:27:30.487653 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.492505 kubelet[2373]: E0625 16:27:30.492379 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.492505 kubelet[2373]: W0625 16:27:30.492497 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.492676 kubelet[2373]: E0625 16:27:30.492639 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.492859 kubelet[2373]: E0625 16:27:30.492836 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.492859 kubelet[2373]: W0625 16:27:30.492850 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.492962 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.493139 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.496018 kubelet[2373]: W0625 16:27:30.493148 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.493200 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.493421 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.496018 kubelet[2373]: W0625 16:27:30.493457 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.493476 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.494098 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.496018 kubelet[2373]: W0625 16:27:30.494109 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.496018 kubelet[2373]: E0625 16:27:30.494126 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.496365 kubelet[2373]: E0625 16:27:30.494358 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.496365 kubelet[2373]: W0625 16:27:30.494368 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.496365 kubelet[2373]: E0625 16:27:30.494381 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.513431 kubelet[2373]: E0625 16:27:30.513388 2373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:30.513431 kubelet[2373]: W0625 16:27:30.513418 2373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:30.513431 kubelet[2373]: E0625 16:27:30.513448 2373 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:30.543335 containerd[1325]: time="2024-06-25T16:27:30.543250594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:30.543583 containerd[1325]: time="2024-06-25T16:27:30.543554974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:30.543712 containerd[1325]: time="2024-06-25T16:27:30.543684927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:30.543826 containerd[1325]: time="2024-06-25T16:27:30.543801265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:30.647064 containerd[1325]: time="2024-06-25T16:27:30.646815036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mf4w5,Uid:b32e6b0d-6943-41a4-a5c4-da0a417e0756,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\"" Jun 25 16:27:30.672013 containerd[1325]: time="2024-06-25T16:27:30.671928874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:27:30.676260 containerd[1325]: time="2024-06-25T16:27:30.676222339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-975958495-7bbh9,Uid:e9c1a55c-8a5d-4f77-9887-d30a381401e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"88872bae90b6f7b58a308695dab2282cf7bf934332364f2ee989d26a5f044b5d\"" Jun 25 16:27:30.943000 audit[2853]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:30.943000 audit[2853]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc36b1bf50 a2=0 a3=7ffc36b1bf3c items=0 ppid=2548 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:30.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:30.945000 audit[2853]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:30.945000 audit[2853]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc36b1bf50 a2=0 a3=0 items=0 ppid=2548 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:30.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:31.729408 kubelet[2373]: E0625 16:27:31.729369 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:32.533470 containerd[1325]: time="2024-06-25T16:27:32.533409468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.535215 containerd[1325]: time="2024-06-25T16:27:32.535141651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:27:32.536304 containerd[1325]: time="2024-06-25T16:27:32.536266126Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.538575 containerd[1325]: time="2024-06-25T16:27:32.538464471Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
16:27:32.540070 containerd[1325]: time="2024-06-25T16:27:32.540046522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.540842 containerd[1325]: time="2024-06-25T16:27:32.540792980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.868450541s" Jun 25 16:27:32.540908 containerd[1325]: time="2024-06-25T16:27:32.540842953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:27:32.543324 containerd[1325]: time="2024-06-25T16:27:32.541782221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:27:32.548366 containerd[1325]: time="2024-06-25T16:27:32.548332273Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:27:32.589935 containerd[1325]: time="2024-06-25T16:27:32.589872912Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad\"" Jun 25 16:27:32.592163 containerd[1325]: time="2024-06-25T16:27:32.591091814Z" level=info msg="StartContainer for \"f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad\"" Jun 25 16:27:32.631184 systemd[1]: run-containerd-runc-k8s.io-f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad-runc.0BUsyC.mount: Deactivated successfully. Jun 25 16:27:32.731569 containerd[1325]: time="2024-06-25T16:27:32.731502820Z" level=info msg="StartContainer for \"f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad\" returns successfully" Jun 25 16:27:32.766914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad-rootfs.mount: Deactivated successfully. 
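
The repeated kubelet warnings above ("unexpected end of JSON input", "executable file not found in $PATH") come from the FlexVolume plugin prober invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument before anything has installed that binary; the flexvol-driver container started from the pod2daemon-flexvol image pulled above appears to be what places it. A FlexVolume driver is expected to answer each call with a JSON status object on stdout, so an empty stdout is exactly what produces those unmarshal errors. A minimal illustrative sketch of the init handshake follows; Python is used only for illustration and this is not Calico's actual uds driver.

```python
#!/usr/bin/env python3
# Illustrative sketch of the FlexVolume "init" handshake the kubelet prober expects.
# An absent or silent driver yields empty output, hence the
# "unexpected end of JSON input" warnings seen above.
import json
import sys


def main() -> None:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # "attach": False tells the kubelet this driver needs no attach/detach calls.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))


if __name__ == "__main__":
    main()
```
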
Jun 25 16:27:32.803815 containerd[1325]: time="2024-06-25T16:27:32.803642258Z" level=info msg="shim disconnected" id=f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad namespace=k8s.io Jun 25 16:27:32.803815 containerd[1325]: time="2024-06-25T16:27:32.803710556Z" level=warning msg="cleaning up after shim disconnected" id=f0d6c715a9c651a2e49179cfed1f4b8641fbc28082961005a3155e0ab6dee2ad namespace=k8s.io Jun 25 16:27:32.803815 containerd[1325]: time="2024-06-25T16:27:32.803722749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:33.720835 kubelet[2373]: E0625 16:27:33.720754 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:35.721257 kubelet[2373]: E0625 16:27:35.721205 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:37.724968 kubelet[2373]: E0625 16:27:37.724858 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:39.128958 containerd[1325]: time="2024-06-25T16:27:39.128898436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:39.131469 containerd[1325]: time="2024-06-25T16:27:39.131425940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:27:39.132938 containerd[1325]: time="2024-06-25T16:27:39.132912976Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:39.136289 containerd[1325]: time="2024-06-25T16:27:39.136260927Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:39.138904 containerd[1325]: time="2024-06-25T16:27:39.138878481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:39.140749 containerd[1325]: time="2024-06-25T16:27:39.140707166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 6.598886373s" Jun 25 16:27:39.140865 containerd[1325]: time="2024-06-25T16:27:39.140843412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:27:39.143344 containerd[1325]: time="2024-06-25T16:27:39.143279104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:27:39.164457 containerd[1325]: time="2024-06-25T16:27:39.164416209Z" level=info msg="CreateContainer within sandbox \"88872bae90b6f7b58a308695dab2282cf7bf934332364f2ee989d26a5f044b5d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:27:39.188257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661657162.mount: Deactivated successfully. Jun 25 16:27:39.200585 containerd[1325]: time="2024-06-25T16:27:39.200524086Z" level=info msg="CreateContainer within sandbox \"88872bae90b6f7b58a308695dab2282cf7bf934332364f2ee989d26a5f044b5d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"341eb28fea2d56a615fcfbc7caa4e0d04005a455cff5ca12e30b99233c59671a\"" Jun 25 16:27:39.201486 containerd[1325]: time="2024-06-25T16:27:39.201436275Z" level=info msg="StartContainer for \"341eb28fea2d56a615fcfbc7caa4e0d04005a455cff5ca12e30b99233c59671a\"" Jun 25 16:27:39.325937 containerd[1325]: time="2024-06-25T16:27:39.325891797Z" level=info msg="StartContainer for \"341eb28fea2d56a615fcfbc7caa4e0d04005a455cff5ca12e30b99233c59671a\" returns successfully" Jun 25 16:27:39.722035 kubelet[2373]: E0625 16:27:39.721998 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:39.954110 kubelet[2373]: I0625 16:27:39.953967 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-975958495-7bbh9" podStartSLOduration=1.493223738 podCreationTimestamp="2024-06-25 16:27:30 +0000 UTC" firstStartedPulling="2024-06-25 16:27:30.680760001 +0000 UTC m=+19.223794106" lastFinishedPulling="2024-06-25 16:27:39.141378274 +0000 UTC m=+27.684412379" observedRunningTime="2024-06-25 16:27:39.923461532 +0000 UTC m=+28.466495758" watchObservedRunningTime="2024-06-25 16:27:39.953842011 +0000 UTC m=+28.496876176" Jun 25 16:27:39.990000 audit[2979]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:39.994249 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:27:39.994316 kernel: audit: type=1325 audit(1719332859.990:283): table=filter:95 family=2 entries=15 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:39.990000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc462a3d80 a2=0 a3=7ffc462a3d6c items=0 ppid=2548 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.004020 kernel: audit: type=1300 audit(1719332859.990:283): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc462a3d80 a2=0 a3=7ffc462a3d6c items=0 ppid=2548 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.990000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:40.010039 kernel: audit: type=1327 audit(1719332859.990:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:40.005000 audit[2979]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:40.005000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc462a3d80 a2=0 a3=7ffc462a3d6c items=0 ppid=2548 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.029155 kernel: audit: type=1325 audit(1719332860.005:284): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:40.029219 kernel: audit: type=1300 audit(1719332860.005:284): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc462a3d80 a2=0 a3=7ffc462a3d6c items=0 ppid=2548 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:40.038021 kernel: audit: type=1327 audit(1719332860.005:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:41.722514 kubelet[2373]: E0625 16:27:41.721579 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:43.723516 kubelet[2373]: E0625 16:27:43.720416 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:45.721933 kubelet[2373]: E0625 16:27:45.721085 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:47.728630 kubelet[2373]: E0625 16:27:47.722268 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:49.720589 kubelet[2373]: E0625 16:27:49.720292 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:50.523681 containerd[1325]: time="2024-06-25T16:27:50.523615501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.527562 containerd[1325]: time="2024-06-25T16:27:50.527436549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:27:50.539114 containerd[1325]: time="2024-06-25T16:27:50.539057513Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.542077 containerd[1325]: time="2024-06-25T16:27:50.542042870Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.544049 containerd[1325]: time="2024-06-25T16:27:50.544017054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:50.545054 containerd[1325]: time="2024-06-25T16:27:50.544975117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 11.401633706s" Jun 25 16:27:50.545159 containerd[1325]: time="2024-06-25T16:27:50.545135621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:27:50.549529 containerd[1325]: time="2024-06-25T16:27:50.549475481Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:27:50.634815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187953184.mount: Deactivated successfully. 
Jun 25 16:27:50.645790 containerd[1325]: time="2024-06-25T16:27:50.645729484Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11\"" Jun 25 16:27:50.646336 containerd[1325]: time="2024-06-25T16:27:50.646303951Z" level=info msg="StartContainer for \"a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11\"" Jun 25 16:27:50.798167 containerd[1325]: time="2024-06-25T16:27:50.797908150Z" level=info msg="StartContainer for \"a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11\" returns successfully" Jun 25 16:27:51.721661 kubelet[2373]: E0625 16:27:51.721571 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:52.712854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11-rootfs.mount: Deactivated successfully. Jun 25 16:27:52.727491 containerd[1325]: time="2024-06-25T16:27:52.726586264Z" level=info msg="shim disconnected" id=a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11 namespace=k8s.io Jun 25 16:27:52.727491 containerd[1325]: time="2024-06-25T16:27:52.726670353Z" level=warning msg="cleaning up after shim disconnected" id=a7e0a976e07bb873c55af73382d324b82911d0d9a4a63292ba85d81b4a37ff11 namespace=k8s.io Jun 25 16:27:52.727491 containerd[1325]: time="2024-06-25T16:27:52.726691075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:52.742398 kubelet[2373]: I0625 16:27:52.742358 2373 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:27:52.779357 kubelet[2373]: I0625 16:27:52.779307 2373 topology_manager.go:215] "Topology Admit Handler" podUID="92239340-c0df-4316-9d55-1b079bb7d1ce" podNamespace="kube-system" podName="coredns-5dd5756b68-twgps" Jun 25 16:27:52.790176 kubelet[2373]: I0625 16:27:52.790114 2373 topology_manager.go:215] "Topology Admit Handler" podUID="aeaccc06-becd-4fb5-9a55-596dc08726e6" podNamespace="kube-system" podName="coredns-5dd5756b68-fd8lc" Jun 25 16:27:52.805187 kubelet[2373]: I0625 16:27:52.805161 2373 topology_manager.go:215] "Topology Admit Handler" podUID="4c83fa87-9e29-4b71-bd23-94783a019eb8" podNamespace="calico-system" podName="calico-kube-controllers-6c86b8c874-zq2rx" Jun 25 16:27:52.954134 containerd[1325]: time="2024-06-25T16:27:52.952757417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:27:52.958684 kubelet[2373]: I0625 16:27:52.958644 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv96t\" (UniqueName: \"kubernetes.io/projected/92239340-c0df-4316-9d55-1b079bb7d1ce-kube-api-access-nv96t\") pod \"coredns-5dd5756b68-twgps\" (UID: \"92239340-c0df-4316-9d55-1b079bb7d1ce\") " pod="kube-system/coredns-5dd5756b68-twgps" Jun 25 16:27:52.959243 kubelet[2373]: I0625 16:27:52.959194 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c83fa87-9e29-4b71-bd23-94783a019eb8-tigera-ca-bundle\") pod 
\"calico-kube-controllers-6c86b8c874-zq2rx\" (UID: \"4c83fa87-9e29-4b71-bd23-94783a019eb8\") " pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" Jun 25 16:27:52.959642 kubelet[2373]: I0625 16:27:52.959578 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m8gf\" (UniqueName: \"kubernetes.io/projected/aeaccc06-becd-4fb5-9a55-596dc08726e6-kube-api-access-7m8gf\") pod \"coredns-5dd5756b68-fd8lc\" (UID: \"aeaccc06-becd-4fb5-9a55-596dc08726e6\") " pod="kube-system/coredns-5dd5756b68-fd8lc" Jun 25 16:27:52.959984 kubelet[2373]: I0625 16:27:52.959955 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92239340-c0df-4316-9d55-1b079bb7d1ce-config-volume\") pod \"coredns-5dd5756b68-twgps\" (UID: \"92239340-c0df-4316-9d55-1b079bb7d1ce\") " pod="kube-system/coredns-5dd5756b68-twgps" Jun 25 16:27:52.960341 kubelet[2373]: I0625 16:27:52.960317 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dswc\" (UniqueName: \"kubernetes.io/projected/4c83fa87-9e29-4b71-bd23-94783a019eb8-kube-api-access-5dswc\") pod \"calico-kube-controllers-6c86b8c874-zq2rx\" (UID: \"4c83fa87-9e29-4b71-bd23-94783a019eb8\") " pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" Jun 25 16:27:52.960777 kubelet[2373]: I0625 16:27:52.960727 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeaccc06-becd-4fb5-9a55-596dc08726e6-config-volume\") pod \"coredns-5dd5756b68-fd8lc\" (UID: \"aeaccc06-becd-4fb5-9a55-596dc08726e6\") " pod="kube-system/coredns-5dd5756b68-fd8lc" Jun 25 16:27:53.115300 containerd[1325]: time="2024-06-25T16:27:53.114574375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c86b8c874-zq2rx,Uid:4c83fa87-9e29-4b71-bd23-94783a019eb8,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:53.130607 containerd[1325]: time="2024-06-25T16:27:53.130550131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fd8lc,Uid:aeaccc06-becd-4fb5-9a55-596dc08726e6,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:53.345781 containerd[1325]: time="2024-06-25T16:27:53.345709461Z" level=error msg="Failed to destroy network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.346313 containerd[1325]: time="2024-06-25T16:27:53.346281759Z" level=error msg="encountered an error cleaning up failed sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.349596 containerd[1325]: time="2024-06-25T16:27:53.348280872Z" level=error msg="Failed to destroy network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.349907 
containerd[1325]: time="2024-06-25T16:27:53.349876641Z" level=error msg="encountered an error cleaning up failed sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.377653 containerd[1325]: time="2024-06-25T16:27:53.377465740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fd8lc,Uid:aeaccc06-becd-4fb5-9a55-596dc08726e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.378198 kubelet[2373]: E0625 16:27:53.378062 2373 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.378198 kubelet[2373]: E0625 16:27:53.378135 2373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fd8lc" Jun 25 16:27:53.378198 kubelet[2373]: E0625 16:27:53.378162 2373 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fd8lc" Jun 25 16:27:53.379732 kubelet[2373]: E0625 16:27:53.378514 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fd8lc_kube-system(aeaccc06-becd-4fb5-9a55-596dc08726e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fd8lc_kube-system(aeaccc06-becd-4fb5-9a55-596dc08726e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fd8lc" podUID="aeaccc06-becd-4fb5-9a55-596dc08726e6" Jun 25 16:27:53.379732 kubelet[2373]: E0625 16:27:53.379644 2373 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.379732 kubelet[2373]: E0625 16:27:53.379710 2373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" Jun 25 16:27:53.380766 containerd[1325]: time="2024-06-25T16:27:53.379307395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c86b8c874-zq2rx,Uid:4c83fa87-9e29-4b71-bd23-94783a019eb8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.380886 kubelet[2373]: E0625 16:27:53.379735 2373 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" Jun 25 16:27:53.380886 kubelet[2373]: E0625 16:27:53.379797 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c86b8c874-zq2rx_calico-system(4c83fa87-9e29-4b71-bd23-94783a019eb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c86b8c874-zq2rx_calico-system(4c83fa87-9e29-4b71-bd23-94783a019eb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" podUID="4c83fa87-9e29-4b71-bd23-94783a019eb8" Jun 25 16:27:53.397680 containerd[1325]: time="2024-06-25T16:27:53.397640340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-twgps,Uid:92239340-c0df-4316-9d55-1b079bb7d1ce,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:53.485402 containerd[1325]: time="2024-06-25T16:27:53.485309975Z" level=error msg="Failed to destroy network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.486262 containerd[1325]: time="2024-06-25T16:27:53.486181257Z" level=error msg="encountered an error cleaning up failed sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.486982 containerd[1325]: time="2024-06-25T16:27:53.486464831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-twgps,Uid:92239340-c0df-4316-9d55-1b079bb7d1ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.487651 kubelet[2373]: E0625 16:27:53.486789 2373 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.487651 kubelet[2373]: E0625 16:27:53.486858 2373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-twgps" Jun 25 16:27:53.487651 kubelet[2373]: E0625 16:27:53.486883 2373 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-twgps" Jun 25 16:27:53.487892 kubelet[2373]: E0625 16:27:53.486947 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-twgps_kube-system(92239340-c0df-4316-9d55-1b079bb7d1ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-twgps_kube-system(92239340-c0df-4316-9d55-1b079bb7d1ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-twgps" podUID="92239340-c0df-4316-9d55-1b079bb7d1ce" Jun 25 16:27:53.718190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1-shm.mount: Deactivated successfully. 
Jun 25 16:27:53.729747 containerd[1325]: time="2024-06-25T16:27:53.729614150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtzcb,Uid:6bca2029-8200-41b2-a394-21e05e7bb1ca,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:53.884092 containerd[1325]: time="2024-06-25T16:27:53.883970263Z" level=error msg="Failed to destroy network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.886911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375-shm.mount: Deactivated successfully. Jun 25 16:27:53.888286 containerd[1325]: time="2024-06-25T16:27:53.887351633Z" level=error msg="encountered an error cleaning up failed sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.888286 containerd[1325]: time="2024-06-25T16:27:53.887438720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtzcb,Uid:6bca2029-8200-41b2-a394-21e05e7bb1ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.888861 kubelet[2373]: E0625 16:27:53.887706 2373 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:53.888861 kubelet[2373]: E0625 16:27:53.887792 2373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:53.888861 kubelet[2373]: E0625 16:27:53.887844 2373 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtzcb" Jun 25 16:27:53.889276 kubelet[2373]: E0625 16:27:53.887907 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xtzcb_calico-system(6bca2029-8200-41b2-a394-21e05e7bb1ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-xtzcb_calico-system(6bca2029-8200-41b2-a394-21e05e7bb1ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:53.951933 kubelet[2373]: I0625 16:27:53.951878 2373 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:27:53.953567 containerd[1325]: time="2024-06-25T16:27:53.953214705Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:27:53.954608 kubelet[2373]: I0625 16:27:53.954363 2373 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:27:53.955371 containerd[1325]: time="2024-06-25T16:27:53.954931629Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:27:53.966613 containerd[1325]: time="2024-06-25T16:27:53.965609268Z" level=info msg="Ensure that sandbox 55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3 in task-service has been cleanup successfully" Jun 25 16:27:53.967057 containerd[1325]: time="2024-06-25T16:27:53.966978368Z" level=info msg="Ensure that sandbox 5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1 in task-service has been cleanup successfully" Jun 25 16:27:53.970048 kubelet[2373]: I0625 16:27:53.968161 2373 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:27:53.970424 containerd[1325]: time="2024-06-25T16:27:53.970381812Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:27:53.970785 containerd[1325]: time="2024-06-25T16:27:53.970766451Z" level=info msg="Ensure that sandbox a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375 in task-service has been cleanup successfully" Jun 25 16:27:53.977534 kubelet[2373]: I0625 16:27:53.977483 2373 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:27:53.978630 containerd[1325]: time="2024-06-25T16:27:53.978595648Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:27:53.979119 containerd[1325]: time="2024-06-25T16:27:53.979099116Z" level=info msg="Ensure that sandbox 5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142 in task-service has been cleanup successfully" Jun 25 16:27:54.065410 containerd[1325]: time="2024-06-25T16:27:54.065351031Z" level=error msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" failed" error="failed to destroy network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:54.066445 kubelet[2373]: E0625 16:27:54.066220 2373 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:27:54.066445 kubelet[2373]: E0625 16:27:54.066309 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1"} Jun 25 16:27:54.066445 kubelet[2373]: E0625 16:27:54.066374 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c83fa87-9e29-4b71-bd23-94783a019eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:54.066445 kubelet[2373]: E0625 16:27:54.066425 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c83fa87-9e29-4b71-bd23-94783a019eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" podUID="4c83fa87-9e29-4b71-bd23-94783a019eb8" Jun 25 16:27:54.069452 containerd[1325]: time="2024-06-25T16:27:54.069407020Z" level=error msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" failed" error="failed to destroy network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:54.069889 kubelet[2373]: E0625 16:27:54.069724 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:27:54.069889 kubelet[2373]: E0625 16:27:54.069755 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3"} Jun 25 16:27:54.069889 kubelet[2373]: E0625 16:27:54.069824 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92239340-c0df-4316-9d55-1b079bb7d1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:54.069889 kubelet[2373]: E0625 16:27:54.069873 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92239340-c0df-4316-9d55-1b079bb7d1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-twgps" podUID="92239340-c0df-4316-9d55-1b079bb7d1ce" Jun 25 16:27:54.070947 containerd[1325]: time="2024-06-25T16:27:54.070915665Z" level=error msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" failed" error="failed to destroy network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:54.071300 containerd[1325]: time="2024-06-25T16:27:54.071188736Z" level=error msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" failed" error="failed to destroy network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:54.071674 kubelet[2373]: E0625 16:27:54.071537 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:27:54.071674 kubelet[2373]: E0625 16:27:54.071567 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375"} Jun 25 16:27:54.071674 kubelet[2373]: E0625 16:27:54.071620 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bca2029-8200-41b2-a394-21e05e7bb1ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:54.071674 kubelet[2373]: E0625 16:27:54.071657 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bca2029-8200-41b2-a394-21e05e7bb1ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:27:54.072197 kubelet[2373]: E0625 16:27:54.072166 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:27:54.072282 kubelet[2373]: E0625 16:27:54.072221 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142"} Jun 25 16:27:54.072282 kubelet[2373]: E0625 16:27:54.072271 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aeaccc06-becd-4fb5-9a55-596dc08726e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:54.072382 kubelet[2373]: E0625 16:27:54.072308 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aeaccc06-becd-4fb5-9a55-596dc08726e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fd8lc" podUID="aeaccc06-becd-4fb5-9a55-596dc08726e6" Jun 25 16:28:06.722421 containerd[1325]: time="2024-06-25T16:28:06.722347669Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:28:06.723334 containerd[1325]: time="2024-06-25T16:28:06.723313189Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:28:06.727485 containerd[1325]: time="2024-06-25T16:28:06.727442952Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:28:06.816356 containerd[1325]: time="2024-06-25T16:28:06.816277637Z" level=error msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" failed" error="failed to destroy network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:06.817020 kubelet[2373]: E0625 16:28:06.816847 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:28:06.817020 kubelet[2373]: E0625 16:28:06.816907 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1"} Jun 25 16:28:06.817565 kubelet[2373]: E0625 16:28:06.817487 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c83fa87-9e29-4b71-bd23-94783a019eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:06.817565 kubelet[2373]: E0625 16:28:06.817541 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c83fa87-9e29-4b71-bd23-94783a019eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" podUID="4c83fa87-9e29-4b71-bd23-94783a019eb8" Jun 25 16:28:06.831724 containerd[1325]: time="2024-06-25T16:28:06.831658234Z" level=error msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" failed" error="failed to destroy network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:06.832260 kubelet[2373]: E0625 16:28:06.832225 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:28:06.832329 kubelet[2373]: E0625 16:28:06.832296 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3"} Jun 25 16:28:06.832378 kubelet[2373]: E0625 16:28:06.832363 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92239340-c0df-4316-9d55-1b079bb7d1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:06.832487 kubelet[2373]: E0625 16:28:06.832402 2373 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92239340-c0df-4316-9d55-1b079bb7d1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-twgps" podUID="92239340-c0df-4316-9d55-1b079bb7d1ce" Jun 25 16:28:06.844408 containerd[1325]: time="2024-06-25T16:28:06.844332755Z" level=error msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" failed" error="failed to destroy network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:06.844777 kubelet[2373]: E0625 16:28:06.844720 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:28:06.844852 kubelet[2373]: E0625 16:28:06.844791 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142"} Jun 25 16:28:06.845216 kubelet[2373]: E0625 16:28:06.844861 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aeaccc06-becd-4fb5-9a55-596dc08726e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:06.845216 kubelet[2373]: E0625 16:28:06.844933 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aeaccc06-becd-4fb5-9a55-596dc08726e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fd8lc" podUID="aeaccc06-becd-4fb5-9a55-596dc08726e6" Jun 25 16:28:09.724478 containerd[1325]: time="2024-06-25T16:28:09.724286503Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:28:09.767892 containerd[1325]: time="2024-06-25T16:28:09.767760012Z" level=error msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" failed" error="failed to destroy network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:09.768311 kubelet[2373]: E0625 16:28:09.768178 2373 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:28:09.768311 kubelet[2373]: E0625 16:28:09.768238 2373 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375"} Jun 25 16:28:09.768311 kubelet[2373]: E0625 16:28:09.768310 2373 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bca2029-8200-41b2-a394-21e05e7bb1ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:09.769707 kubelet[2373]: E0625 16:28:09.768370 2373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bca2029-8200-41b2-a394-21e05e7bb1ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtzcb" podUID="6bca2029-8200-41b2-a394-21e05e7bb1ca" Jun 25 16:28:13.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.182:22-172.24.4.1:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.912213 systemd[1]: Started sshd@7-172.24.4.182:22-172.24.4.1:45488.service - OpenSSH per-connection server daemon (172.24.4.1:45488). Jun 25 16:28:13.920161 kernel: audit: type=1130 audit(1719332893.911:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.182:22-172.24.4.1:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:15.138000 audit[3351]: USER_ACCT pid=3351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.139662 sshd[3351]: Accepted publickey for core from 172.24.4.1 port 45488 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:15.141000 audit[3351]: CRED_ACQ pid=3351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.144119 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:15.145929 kernel: audit: type=1101 audit(1719332895.138:286): pid=3351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.146056 kernel: audit: type=1103 audit(1719332895.141:287): pid=3351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.160712 kernel: audit: type=1006 audit(1719332895.141:288): pid=3351 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 16:28:15.160857 kernel: audit: type=1300 audit(1719332895.141:288): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc437b7500 a2=3 a3=7fb47b22c480 items=0 ppid=1 pid=3351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:15.141000 audit[3351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc437b7500 a2=3 a3=7fb47b22c480 items=0 ppid=1 pid=3351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:15.141000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:15.165648 kernel: audit: type=1327 audit(1719332895.141:288): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:15.166355 systemd-logind[1302]: New session 8 of user core. Jun 25 16:28:15.168312 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:28:15.175000 audit[3351]: USER_START pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.184355 kernel: audit: type=1105 audit(1719332895.175:289): pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.184000 audit[3354]: CRED_ACQ pid=3354 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:15.191945 kernel: audit: type=1103 audit(1719332895.184:290): pid=3354 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:16.202478 sshd[3351]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:16.203000 audit[3351]: USER_END pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:16.205904 systemd[1]: sshd@7-172.24.4.182:22-172.24.4.1:45488.service: Deactivated successfully. Jun 25 16:28:16.206855 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:28:16.209037 kernel: audit: type=1106 audit(1719332896.203:291): pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:16.329836 kernel: audit: type=1104 audit(1719332896.203:292): pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:16.203000 audit[3351]: CRED_DISP pid=3351 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:16.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.182:22-172.24.4.1:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:16.210604 systemd-logind[1302]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:28:16.211607 systemd-logind[1302]: Removed session 8. Jun 25 16:28:16.931438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578917139.mount: Deactivated successfully. 
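The SSH session above (session 8 for user core) is bracketed by audit records, USER_ACCT, CRED_ACQ, USER_START, USER_END and CRED_DISP, whose payload is a flat run of space-separated key=value fields. A minimal Go sketch for pulling those fields out of such a payload; the field names (op, acct, exe, res) come from the records above, while the function name and the sample line are illustrative only.

```go
package main

import (
	"fmt"
	"regexp"
)

// fieldRe matches space-separated key=value pairs as they appear in the
// audit payloads above; values are either bare tokens or quoted strings.
var fieldRe = regexp.MustCompile(`(\w+)=("[^"]*"|\S+)`)

// parseAuditFields extracts the key=value fields of one audit record payload.
func parseAuditFields(record string) map[string]string {
	fields := map[string]string{}
	for _, m := range fieldRe.FindAllStringSubmatch(record, -1) {
		key, val := m[1], m[2]
		if len(val) >= 2 && val[0] == '"' {
			val = val[1 : len(val)-1] // strip surrounding quotes
		}
		fields[key] = val
	}
	return fields
}

func main() {
	// Illustrative payload modelled on the USER_END record above.
	rec := `op=PAM:session_close acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success`
	f := parseAuditFields(rec)
	fmt.Println(f["op"], f["acct"], f["exe"], f["res"])
}
```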
Jun 25 16:28:17.019130 containerd[1325]: time="2024-06-25T16:28:17.018505246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:28:17.020211 containerd[1325]: time="2024-06-25T16:28:17.011797051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:17.031908 containerd[1325]: time="2024-06-25T16:28:17.030951790Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:17.034276 containerd[1325]: time="2024-06-25T16:28:17.034232500Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:17.036327 containerd[1325]: time="2024-06-25T16:28:17.036293917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:17.037266 containerd[1325]: time="2024-06-25T16:28:17.037232390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 24.084401976s" Jun 25 16:28:17.037418 containerd[1325]: time="2024-06-25T16:28:17.037382213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:28:17.075691 containerd[1325]: time="2024-06-25T16:28:17.075632806Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:28:17.112484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380495467.mount: Deactivated successfully. Jun 25 16:28:17.133233 containerd[1325]: time="2024-06-25T16:28:17.133190757Z" level=info msg="CreateContainer within sandbox \"d8659c4a44a558274eb08a9c70dbd3d1cebce246f88abceb1f4d173cf22b124c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58\"" Jun 25 16:28:17.134866 containerd[1325]: time="2024-06-25T16:28:17.134413567Z" level=info msg="StartContainer for \"a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58\"" Jun 25 16:28:17.483603 containerd[1325]: time="2024-06-25T16:28:17.483515185Z" level=info msg="StartContainer for \"a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58\" returns successfully" Jun 25 16:28:17.907911 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:28:17.908500 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
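containerd reports the ghcr.io/flatcar/calico/node:v3.28.0 pull above as completing "in 24.084401976s". A small sketch of how an elapsed time like that can be recomputed from RFC 3339 nanosecond timestamps of the kind the containerd events carry; the start value below is back-computed from the logged end time and duration purely for illustration, it is not a timestamp taken from the log.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps in the same RFC 3339 nanosecond format used by the
	// containerd log lines above. The start value is derived here only so
	// that the difference reproduces the logged 24.084401976s figure.
	start, err := time.Parse(time.RFC3339Nano, "2024-06-25T16:27:52.952830414Z")
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2024-06-25T16:28:17.037232390Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(end.Sub(start)) // prints 24.084401976s
}
```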
Jun 25 16:28:18.222241 kubelet[2373]: I0625 16:28:18.221513 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mf4w5" podStartSLOduration=1.8488385809999999 podCreationTimestamp="2024-06-25 16:27:30 +0000 UTC" firstStartedPulling="2024-06-25 16:27:30.665106547 +0000 UTC m=+19.208140662" lastFinishedPulling="2024-06-25 16:28:17.03772365 +0000 UTC m=+65.580757755" observedRunningTime="2024-06-25 16:28:18.220632427 +0000 UTC m=+66.763666532" watchObservedRunningTime="2024-06-25 16:28:18.221455674 +0000 UTC m=+66.764489789" Jun 25 16:28:18.258855 systemd[1]: run-containerd-runc-k8s.io-a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58-runc.0gCOUl.mount: Deactivated successfully. Jun 25 16:28:18.721743 containerd[1325]: time="2024-06-25T16:28:18.721633907Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:28:19.146160 systemd[1]: run-containerd-runc-k8s.io-a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58-runc.XUQJni.mount: Deactivated successfully. Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.875 [INFO][3463] k8s.go 608: Cleaning up netns ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.877 [INFO][3463] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" iface="eth0" netns="/var/run/netns/cni-2b8596e7-d2c2-0d95-6081-aba6976c2799" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.878 [INFO][3463] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" iface="eth0" netns="/var/run/netns/cni-2b8596e7-d2c2-0d95-6081-aba6976c2799" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.879 [INFO][3463] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" iface="eth0" netns="/var/run/netns/cni-2b8596e7-d2c2-0d95-6081-aba6976c2799" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.879 [INFO][3463] k8s.go 615: Releasing IP address(es) ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:18.879 [INFO][3463] utils.go 188: Calico CNI releasing IP address ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.314 [INFO][3470] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.318 [INFO][3470] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.318 [INFO][3470] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.336 [WARNING][3470] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.336 [INFO][3470] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.338 [INFO][3470] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:19.342018 containerd[1325]: 2024-06-25 16:28:19.340 [INFO][3463] k8s.go 621: Teardown processing complete. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:28:19.347191 containerd[1325]: time="2024-06-25T16:28:19.346399675Z" level=info msg="TearDown network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" successfully" Jun 25 16:28:19.347191 containerd[1325]: time="2024-06-25T16:28:19.346508847Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" returns successfully" Jun 25 16:28:19.345273 systemd[1]: run-netns-cni\x2d2b8596e7\x2dd2c2\x2d0d95\x2d6081\x2daba6976c2799.mount: Deactivated successfully. Jun 25 16:28:19.347566 containerd[1325]: time="2024-06-25T16:28:19.347492405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fd8lc,Uid:aeaccc06-becd-4fb5-9a55-596dc08726e6,Namespace:kube-system,Attempt:1,}" Jun 25 16:28:19.500000 audit[3536]: AVC avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=27383 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.503225 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:19.503360 kernel: audit: type=1400 audit(1719332899.500:294): avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=27383 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.510156 kernel: audit: type=1300 audit(1719332899.500:294): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf41109f4 a2=241 a3=1b6 items=1 ppid=3531 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.500000 audit[3536]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf41109f4 a2=241 a3=1b6 items=1 ppid=3531 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.500000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:28:19.515087 kernel: audit: type=1307 audit(1719332899.500:294): cwd="/etc/service/enabled/bird6/log" Jun 25 16:28:19.500000 audit: PATH item=0 name="/dev/fd/63" inode=27364 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.519078 kernel: audit: type=1302 audit(1719332899.500:294): item=0 name="/dev/fd/63" inode=27364 dev=00:0c 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.529809 kernel: audit: type=1327 audit(1719332899.500:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.526000 audit[3555]: AVC avc: denied { write } for pid=3555 comm="tee" name="fd" dev="proc" ino=27394 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.537415 kernel: audit: type=1400 audit(1719332899.526:295): avc: denied { write } for pid=3555 comm="tee" name="fd" dev="proc" ino=27394 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.551468 kernel: audit: type=1300 audit(1719332899.526:295): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccf06d9f5 a2=241 a3=1b6 items=1 ppid=3538 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.526000 audit[3555]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccf06d9f5 a2=241 a3=1b6 items=1 ppid=3538 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.557379 kernel: audit: type=1307 audit(1719332899.526:295): cwd="/etc/service/enabled/bird/log" Jun 25 16:28:19.526000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:28:19.561252 kernel: audit: type=1302 audit(1719332899.526:295): item=0 name="/dev/fd/63" inode=27391 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.526000 audit: PATH item=0 name="/dev/fd/63" inode=27391 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.526000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.567108 kernel: audit: type=1327 audit(1719332899.526:295): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.547000 audit[3560]: AVC avc: denied { write } for pid=3560 comm="tee" name="fd" dev="proc" ino=27413 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.547000 audit[3560]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd87d29e5 a2=241 a3=1b6 items=1 ppid=3546 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.547000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:28:19.547000 audit: PATH item=0 name="/dev/fd/63" inode=27404 dev=00:0c 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.610000 audit[3576]: AVC avc: denied { write } for pid=3576 comm="tee" name="fd" dev="proc" ino=27440 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.610000 audit[3576]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff10ea49e4 a2=241 a3=1b6 items=1 ppid=3524 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.610000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:28:19.610000 audit: PATH item=0 name="/dev/fd/63" inode=27433 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.633000 audit[3597]: AVC avc: denied { write } for pid=3597 comm="tee" name="fd" dev="proc" ino=26548 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.633000 audit[3597]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde9ea99f4 a2=241 a3=1b6 items=1 ppid=3542 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.633000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:28:19.633000 audit: PATH item=0 name="/dev/fd/63" inode=26545 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.635000 audit[3587]: AVC avc: denied { write } for pid=3587 comm="tee" name="fd" dev="proc" ino=26552 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.635000 audit[3587]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe649b9f4 a2=241 a3=1b6 items=1 ppid=3544 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.635000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:28:19.635000 audit: PATH item=0 name="/dev/fd/63" inode=26544 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.635000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.639000 audit[3579]: AVC avc: denied { write } for pid=3579 
comm="tee" name="fd" dev="proc" ino=26556 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:28:19.639000 audit[3579]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd03af99f6 a2=241 a3=1b6 items=1 ppid=3522 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:19.639000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:28:19.639000 audit: PATH item=0 name="/dev/fd/63" inode=26543 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:28:19.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:28:19.656619 systemd-networkd[1092]: cali54a7e45f219: Link UP Jun 25 16:28:19.659720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:28:19.659787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali54a7e45f219: link becomes ready Jun 25 16:28:19.660489 systemd-networkd[1092]: cali54a7e45f219: Gained carrier Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.426 [INFO][3500] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.440 [INFO][3500] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0 coredns-5dd5756b68- kube-system aeaccc06-becd-4fb5-9a55-596dc08726e6 805 0 2024-06-25 16:27:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815-2-4-3-54e11b9a94.novalocal coredns-5dd5756b68-fd8lc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54a7e45f219 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.440 [INFO][3500] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.554 [INFO][3513] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" HandleID="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.591 [INFO][3513] ipam_plugin.go 264: Auto assigning IP ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" HandleID="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310370), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815-2-4-3-54e11b9a94.novalocal", "pod":"coredns-5dd5756b68-fd8lc", "timestamp":"2024-06-25 16:28:19.554374783 +0000 UTC"}, Hostname:"ci-3815-2-4-3-54e11b9a94.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.591 [INFO][3513] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.591 [INFO][3513] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.591 [INFO][3513] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-3-54e11b9a94.novalocal' Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.594 [INFO][3513] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.608 [INFO][3513] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.612 [INFO][3513] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.614 [INFO][3513] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.616 [INFO][3513] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.616 [INFO][3513] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.619 [INFO][3513] ipam.go 1685: Creating new handle: k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32 Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.623 [INFO][3513] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.629 [INFO][3513] ipam.go 1216: Successfully claimed IPs: [192.168.32.129/26] block=192.168.32.128/26 handle="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.629 [INFO][3513] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.129/26] handle="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.629 [INFO][3513] ipam_plugin.go 373: Released host-wide IPAM lock. 
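The tee records a few lines up (and the bpftool and iptables activity further down) carry a PROCTITLE field: the command line of the audited process, hex-encoded with NUL bytes between arguments. A short sketch that decodes one of those payloads, copied verbatim from the tee records above, back into readable form.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE payload (hex-encoded argv with
// NUL separators) back into a space-joined command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// PROCTITLE value copied from one of the tee records above.
	cmd, err := decodeProctitle("2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}
```

Decoded, it reads /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /dev/fd/63, which matches the comm="tee" and exe="/usr/bin/coreutils" fields in the same records.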
Jun 25 16:28:19.707828 containerd[1325]: 2024-06-25 16:28:19.629 [INFO][3513] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.129/26] IPv6=[] ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" HandleID="k8s-pod-network.087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.631 [INFO][3500] k8s.go 386: Populated endpoint ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aeaccc06-becd-4fb5-9a55-596dc08726e6", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"", Pod:"coredns-5dd5756b68-fd8lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54a7e45f219", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.631 [INFO][3500] k8s.go 387: Calico CNI using IPs: [192.168.32.129/32] ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.632 [INFO][3500] dataplane_linux.go 68: Setting the host side veth name to cali54a7e45f219 ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.664 [INFO][3500] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" 
WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.665 [INFO][3500] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aeaccc06-becd-4fb5-9a55-596dc08726e6", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32", Pod:"coredns-5dd5756b68-fd8lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54a7e45f219", MAC:"b2:2f:18:ee:ee:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:19.708692 containerd[1325]: 2024-06-25 16:28:19.689 [INFO][3500] k8s.go 500: Wrote updated endpoint to datastore ContainerID="087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32" Namespace="kube-system" Pod="coredns-5dd5756b68-fd8lc" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:28:19.723190 containerd[1325]: time="2024-06-25T16:28:19.723130402Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:28:19.914091 containerd[1325]: time="2024-06-25T16:28:19.912149079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:19.914091 containerd[1325]: time="2024-06-25T16:28:19.912213184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:19.914091 containerd[1325]: time="2024-06-25T16:28:19.912232030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:19.914091 containerd[1325]: time="2024-06-25T16:28:19.912247070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.919 [INFO][3629] k8s.go 608: Cleaning up netns ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.919 [INFO][3629] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" iface="eth0" netns="/var/run/netns/cni-68099cf6-e11d-5c3d-aeb5-cb9430efab44" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.920 [INFO][3629] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" iface="eth0" netns="/var/run/netns/cni-68099cf6-e11d-5c3d-aeb5-cb9430efab44" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.923 [INFO][3629] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" iface="eth0" netns="/var/run/netns/cni-68099cf6-e11d-5c3d-aeb5-cb9430efab44" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.923 [INFO][3629] k8s.go 615: Releasing IP address(es) ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:19.923 [INFO][3629] utils.go 188: Calico CNI releasing IP address ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.034 [INFO][3658] ipam_plugin.go 411: Releasing address using handleID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.034 [INFO][3658] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.034 [INFO][3658] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.042 [WARNING][3658] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.042 [INFO][3658] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.048 [INFO][3658] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:20.054141 containerd[1325]: 2024-06-25 16:28:20.051 [INFO][3629] k8s.go 621: Teardown processing complete. 
ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:28:20.054937 containerd[1325]: time="2024-06-25T16:28:20.054891000Z" level=info msg="TearDown network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" successfully" Jun 25 16:28:20.055049 containerd[1325]: time="2024-06-25T16:28:20.055026605Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" returns successfully" Jun 25 16:28:20.056012 containerd[1325]: time="2024-06-25T16:28:20.055916247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c86b8c874-zq2rx,Uid:4c83fa87-9e29-4b71-bd23-94783a019eb8,Namespace:calico-system,Attempt:1,}" Jun 25 16:28:20.115283 containerd[1325]: time="2024-06-25T16:28:20.115237306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fd8lc,Uid:aeaccc06-becd-4fb5-9a55-596dc08726e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32\"" Jun 25 16:28:20.153371 systemd[1]: run-netns-cni\x2d68099cf6\x2de11d\x2d5c3d\x2daeb5\x2dcb9430efab44.mount: Deactivated successfully. Jun 25 16:28:20.233195 containerd[1325]: time="2024-06-25T16:28:20.233071678Z" level=info msg="CreateContainer within sandbox \"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:28:20.292848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987998916.mount: Deactivated successfully. Jun 25 16:28:20.322860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728271781.mount: Deactivated successfully. Jun 25 16:28:20.335023 containerd[1325]: time="2024-06-25T16:28:20.334933727Z" level=info msg="CreateContainer within sandbox \"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"444b8388e0ed5dc061fd525ad8819dbbeacec14845be9e9372194730a6403ad8\"" Jun 25 16:28:20.336033 containerd[1325]: time="2024-06-25T16:28:20.336007669Z" level=info msg="StartContainer for \"444b8388e0ed5dc061fd525ad8819dbbeacec14845be9e9372194730a6403ad8\"" Jun 25 16:28:20.356358 systemd-networkd[1092]: cali62adc4ef477: Link UP Jun 25 16:28:20.359125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali62adc4ef477: link becomes ready Jun 25 16:28:20.359421 systemd-networkd[1092]: cali62adc4ef477: Gained carrier Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.170 [INFO][3706] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.208 [INFO][3706] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0 calico-kube-controllers-6c86b8c874- calico-system 4c83fa87-9e29-4b71-bd23-94783a019eb8 819 0 2024-06-25 16:27:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c86b8c874 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815-2-4-3-54e11b9a94.novalocal calico-kube-controllers-6c86b8c874-zq2rx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali62adc4ef477 [] []}} ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" 
Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.208 [INFO][3706] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.255 [INFO][3725] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" HandleID="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.271 [INFO][3725] ipam_plugin.go 264: Auto assigning IP ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" HandleID="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815-2-4-3-54e11b9a94.novalocal", "pod":"calico-kube-controllers-6c86b8c874-zq2rx", "timestamp":"2024-06-25 16:28:20.255741479 +0000 UTC"}, Hostname:"ci-3815-2-4-3-54e11b9a94.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.274 [INFO][3725] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.274 [INFO][3725] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.274 [INFO][3725] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-3-54e11b9a94.novalocal' Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.278 [INFO][3725] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.293 [INFO][3725] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.309 [INFO][3725] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.317 [INFO][3725] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.321 [INFO][3725] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.321 [INFO][3725] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.333 [INFO][3725] ipam.go 1685: Creating new handle: k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.343 [INFO][3725] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.350 [INFO][3725] ipam.go 1216: Successfully claimed IPs: [192.168.32.130/26] block=192.168.32.128/26 handle="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.350 [INFO][3725] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.130/26] handle="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.350 [INFO][3725] ipam_plugin.go 373: Released host-wide IPAM lock. 
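The IPAM traces above show the host claiming an affinity for the block 192.168.32.128/26 and then handing out 192.168.32.129 to the coredns pod and 192.168.32.130 to calico-kube-controllers. A minimal standard-library sketch of the containment arithmetic implied by that flow, not Calico's own IPAM code, checking that both addresses fall inside the affine /26.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and addresses taken from the IPAM traces above.
	_, block, err := net.ParseCIDR("192.168.32.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // a /26 holds 64

	for _, s := range []string{"192.168.32.129", "192.168.32.130"} {
		ip := net.ParseIP(s)
		fmt.Printf("%s inside %s: %v\n", ip, block, block.Contains(ip))
	}
}
```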
Jun 25 16:28:20.381222 containerd[1325]: 2024-06-25 16:28:20.350 [INFO][3725] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.130/26] IPv6=[] ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" HandleID="k8s-pod-network.0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.353 [INFO][3706] k8s.go 386: Populated endpoint ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0", GenerateName:"calico-kube-controllers-6c86b8c874-", Namespace:"calico-system", SelfLink:"", UID:"4c83fa87-9e29-4b71-bd23-94783a019eb8", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c86b8c874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6c86b8c874-zq2rx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62adc4ef477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.353 [INFO][3706] k8s.go 387: Calico CNI using IPs: [192.168.32.130/32] ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.353 [INFO][3706] dataplane_linux.go 68: Setting the host side veth name to cali62adc4ef477 ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.360 [INFO][3706] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" 
WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.361 [INFO][3706] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0", GenerateName:"calico-kube-controllers-6c86b8c874-", Namespace:"calico-system", SelfLink:"", UID:"4c83fa87-9e29-4b71-bd23-94783a019eb8", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c86b8c874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f", Pod:"calico-kube-controllers-6c86b8c874-zq2rx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62adc4ef477", MAC:"92:f3:1b:94:8e:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:20.382048 containerd[1325]: 2024-06-25 16:28:20.375 [INFO][3706] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f" Namespace="calico-system" Pod="calico-kube-controllers-6c86b8c874-zq2rx" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:28:20.475186 containerd[1325]: time="2024-06-25T16:28:20.474916855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:20.475186 containerd[1325]: time="2024-06-25T16:28:20.475006531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:20.475186 containerd[1325]: time="2024-06-25T16:28:20.475031700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:20.475186 containerd[1325]: time="2024-06-25T16:28:20.475049624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:20.488028 systemd-networkd[1092]: vxlan.calico: Link UP Jun 25 16:28:20.488043 systemd-networkd[1092]: vxlan.calico: Gained carrier Jun 25 16:28:20.526000 audit: BPF prog-id=10 op=LOAD Jun 25 16:28:20.526000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff28533c60 a2=70 a3=7f62fc765000 items=0 ppid=3548 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.526000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:28:20.526000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:28:20.526000 audit: BPF prog-id=11 op=LOAD Jun 25 16:28:20.526000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff28533c60 a2=70 a3=6f items=0 ppid=3548 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.526000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:28:20.526000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:28:20.526000 audit: BPF prog-id=12 op=LOAD Jun 25 16:28:20.526000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff28533bf0 a2=70 a3=7fff28533c60 items=0 ppid=3548 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.526000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:28:20.526000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:28:20.526000 audit: BPF prog-id=13 op=LOAD Jun 25 16:28:20.526000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff28533c20 a2=70 a3=0 items=0 ppid=3548 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.526000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:28:20.531579 containerd[1325]: time="2024-06-25T16:28:20.531376630Z" level=info msg="StartContainer for \"444b8388e0ed5dc061fd525ad8819dbbeacec14845be9e9372194730a6403ad8\" returns successfully" Jun 25 16:28:20.555000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:28:20.616193 containerd[1325]: time="2024-06-25T16:28:20.616137646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c86b8c874-zq2rx,Uid:4c83fa87-9e29-4b71-bd23-94783a019eb8,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f\"" Jun 25 16:28:20.639968 containerd[1325]: time="2024-06-25T16:28:20.639905536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:28:20.666000 audit[3886]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:20.666000 audit[3886]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffcea477c0 a2=0 a3=7fffcea477ac items=0 ppid=3548 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.666000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:20.682000 audit[3884]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:20.682000 audit[3884]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd0ed78320 a2=0 a3=7ffd0ed7830c items=0 ppid=3548 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.682000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:20.685000 audit[3885]: NETFILTER_CFG table=raw:99 family=2 entries=19 op=nft_register_chain pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:20.685000 audit[3885]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff79d5d590 a2=0 a3=7fff79d5d57c items=0 ppid=3548 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:20.687000 audit[3888]: NETFILTER_CFG table=filter:100 family=2 entries=99 op=nft_register_chain pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:20.687000 audit[3888]: SYSCALL arch=c000003e syscall=46 success=yes exit=53840 a0=3 a1=7fff61b95710 a2=0 a3=7fff61b956fc items=0 ppid=3548 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.687000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:21.217940 systemd[1]: Started sshd@8-172.24.4.182:22-172.24.4.1:48732.service - OpenSSH per-connection server daemon (172.24.4.1:48732). Jun 25 16:28:21.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.182:22-172.24.4.1:48732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:21.248492 kubelet[2373]: I0625 16:28:21.248415 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fd8lc" podStartSLOduration=58.248324468 podCreationTimestamp="2024-06-25 16:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:21.247482308 +0000 UTC m=+69.790516493" watchObservedRunningTime="2024-06-25 16:28:21.248324468 +0000 UTC m=+69.791358623" Jun 25 16:28:21.307000 audit[3902]: NETFILTER_CFG table=filter:101 family=2 entries=14 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:21.307000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe96163c30 a2=0 a3=7ffe96163c1c items=0 ppid=2548 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:21.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:21.321000 audit[3902]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:21.321000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe96163c30 a2=0 a3=0 items=0 ppid=2548 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:21.321000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:21.610367 systemd-networkd[1092]: cali54a7e45f219: Gained IPv6LL Jun 25 16:28:21.752713 containerd[1325]: time="2024-06-25T16:28:21.723792004Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:28:21.752713 containerd[1325]: time="2024-06-25T16:28:21.726344311Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.859 [INFO][3936] k8s.go 608: Cleaning up netns ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.860 [INFO][3936] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" iface="eth0" netns="/var/run/netns/cni-3a6020d9-33e5-fddb-a9ae-dab23c9c59a8" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.860 [INFO][3936] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" iface="eth0" netns="/var/run/netns/cni-3a6020d9-33e5-fddb-a9ae-dab23c9c59a8" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.860 [INFO][3936] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" iface="eth0" netns="/var/run/netns/cni-3a6020d9-33e5-fddb-a9ae-dab23c9c59a8" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.860 [INFO][3936] k8s.go 615: Releasing IP address(es) ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.861 [INFO][3936] utils.go 188: Calico CNI releasing IP address ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.907 [INFO][3949] ipam_plugin.go 411: Releasing address using handleID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.907 [INFO][3949] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.907 [INFO][3949] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.930 [WARNING][3949] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.930 [INFO][3949] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.931 [INFO][3949] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:21.935179 containerd[1325]: 2024-06-25 16:28:21.933 [INFO][3936] k8s.go 621: Teardown processing complete. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:28:21.936665 containerd[1325]: time="2024-06-25T16:28:21.935841904Z" level=info msg="TearDown network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" successfully" Jun 25 16:28:21.936665 containerd[1325]: time="2024-06-25T16:28:21.935892603Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" returns successfully" Jun 25 16:28:21.941920 systemd[1]: run-netns-cni\x2d3a6020d9\x2d33e5\x2dfddb\x2da9ae\x2ddab23c9c59a8.mount: Deactivated successfully. Jun 25 16:28:21.943359 containerd[1325]: time="2024-06-25T16:28:21.943297154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtzcb,Uid:6bca2029-8200-41b2-a394-21e05e7bb1ca,Namespace:calico-system,Attempt:1,}" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.896 [INFO][3938] k8s.go 608: Cleaning up netns ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.896 [INFO][3938] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" iface="eth0" netns="/var/run/netns/cni-9852a67d-04da-e98c-5548-7fefef7d9179" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.896 [INFO][3938] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" iface="eth0" netns="/var/run/netns/cni-9852a67d-04da-e98c-5548-7fefef7d9179" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.897 [INFO][3938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" iface="eth0" netns="/var/run/netns/cni-9852a67d-04da-e98c-5548-7fefef7d9179" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.897 [INFO][3938] k8s.go 615: Releasing IP address(es) ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.897 [INFO][3938] utils.go 188: Calico CNI releasing IP address ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.948 [INFO][3954] ipam_plugin.go 411: Releasing address using handleID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.948 [INFO][3954] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.948 [INFO][3954] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.956 [WARNING][3954] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.956 [INFO][3954] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.958 [INFO][3954] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:21.963885 containerd[1325]: 2024-06-25 16:28:21.962 [INFO][3938] k8s.go 621: Teardown processing complete. 
ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:28:21.969497 containerd[1325]: time="2024-06-25T16:28:21.964679330Z" level=info msg="TearDown network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" successfully" Jun 25 16:28:21.969497 containerd[1325]: time="2024-06-25T16:28:21.964719598Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" returns successfully" Jun 25 16:28:21.969497 containerd[1325]: time="2024-06-25T16:28:21.968119115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-twgps,Uid:92239340-c0df-4316-9d55-1b079bb7d1ce,Namespace:kube-system,Attempt:1,}" Jun 25 16:28:21.968827 systemd[1]: run-netns-cni\x2d9852a67d\x2d04da\x2de98c\x2d5548\x2d7fefef7d9179.mount: Deactivated successfully. Jun 25 16:28:22.216000 audit[3997]: NETFILTER_CFG table=filter:103 family=2 entries=11 op=nft_register_rule pid=3997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:22.216000 audit[3997]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffc4e45af0 a2=0 a3=7fffc4e45adc items=0 ppid=2548 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:22.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:22.219000 audit[3997]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=3997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:22.219000 audit[3997]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffc4e45af0 a2=0 a3=7fffc4e45adc items=0 ppid=2548 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:22.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:22.316265 systemd-networkd[1092]: cali62adc4ef477: Gained IPv6LL Jun 25 16:28:22.377651 systemd-networkd[1092]: calie47b690226b: Link UP Jun 25 16:28:22.378888 systemd-networkd[1092]: vxlan.calico: Gained IPv6LL Jun 25 16:28:22.382716 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:28:22.382829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie47b690226b: link becomes ready Jun 25 16:28:22.383140 systemd-networkd[1092]: calie47b690226b: Gained carrier Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.099 [INFO][3961] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0 csi-node-driver- calico-system 6bca2029-8200-41b2-a394-21e05e7bb1ca 841 0 2024-06-25 16:27:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815-2-4-3-54e11b9a94.novalocal csi-node-driver-xtzcb eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie47b690226b [] []}} 
ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.099 [INFO][3961] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.244 [INFO][3984] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" HandleID="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.266 [INFO][3984] ipam_plugin.go 264: Auto assigning IP ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" HandleID="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815-2-4-3-54e11b9a94.novalocal", "pod":"csi-node-driver-xtzcb", "timestamp":"2024-06-25 16:28:22.244541075 +0000 UTC"}, Hostname:"ci-3815-2-4-3-54e11b9a94.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.266 [INFO][3984] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.266 [INFO][3984] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.266 [INFO][3984] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-3-54e11b9a94.novalocal' Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.269 [INFO][3984] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.280 [INFO][3984] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.300 [INFO][3984] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.307 [INFO][3984] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.312 [INFO][3984] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.312 [INFO][3984] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.314 [INFO][3984] ipam.go 1685: Creating new handle: k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.332 [INFO][3984] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3984] ipam.go 1216: Successfully claimed IPs: [192.168.32.131/26] block=192.168.32.128/26 handle="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3984] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.131/26] handle="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3984] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:28:22.417313 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3984] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.131/26] IPv6=[] ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" HandleID="k8s-pod-network.c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.354 [INFO][3961] k8s.go 386: Populated endpoint ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bca2029-8200-41b2-a394-21e05e7bb1ca", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"", Pod:"csi-node-driver-xtzcb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie47b690226b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.354 [INFO][3961] k8s.go 387: Calico CNI using IPs: [192.168.32.131/32] ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.354 [INFO][3961] dataplane_linux.go 68: Setting the host side veth name to calie47b690226b ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.383 [INFO][3961] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.391 [INFO][3961] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bca2029-8200-41b2-a394-21e05e7bb1ca", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c", Pod:"csi-node-driver-xtzcb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie47b690226b", MAC:"42:cd:ca:2d:28:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:22.418511 containerd[1325]: 2024-06-25 16:28:22.414 [INFO][3961] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c" Namespace="calico-system" Pod="csi-node-driver-xtzcb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:28:22.430000 audit[4010]: NETFILTER_CFG table=filter:105 family=2 entries=38 op=nft_register_chain pid=4010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:22.430000 audit[4010]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffd38064020 a2=0 a3=7ffd3806400c items=0 ppid=3548 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:22.430000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:22.439938 systemd-networkd[1092]: cali6689c856e21: Link UP Jun 25 16:28:22.445298 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6689c856e21: link becomes ready Jun 25 16:28:22.445203 systemd-networkd[1092]: cali6689c856e21: Gained carrier Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.156 [INFO][3972] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0 coredns-5dd5756b68- kube-system 92239340-c0df-4316-9d55-1b079bb7d1ce 842 0 2024-06-25 16:27:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815-2-4-3-54e11b9a94.novalocal coredns-5dd5756b68-twgps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6689c856e21 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.156 [INFO][3972] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.289 [INFO][3991] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" HandleID="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.311 [INFO][3991] ipam_plugin.go 264: Auto assigning IP ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" HandleID="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003087e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815-2-4-3-54e11b9a94.novalocal", "pod":"coredns-5dd5756b68-twgps", "timestamp":"2024-06-25 16:28:22.289860934 +0000 UTC"}, Hostname:"ci-3815-2-4-3-54e11b9a94.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.311 [INFO][3991] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3991] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.341 [INFO][3991] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-3-54e11b9a94.novalocal' Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.344 [INFO][3991] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.361 [INFO][3991] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.368 [INFO][3991] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.371 [INFO][3991] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.376 [INFO][3991] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.376 [INFO][3991] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.384 [INFO][3991] ipam.go 1685: Creating new handle: k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5 Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.394 [INFO][3991] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.403 [INFO][3991] ipam.go 1216: Successfully claimed IPs: [192.168.32.132/26] block=192.168.32.128/26 handle="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.403 [INFO][3991] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.132/26] handle="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.403 [INFO][3991] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:28:22.469963 containerd[1325]: 2024-06-25 16:28:22.403 [INFO][3991] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.132/26] IPv6=[] ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" HandleID="k8s-pod-network.f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.421 [INFO][3972] k8s.go 386: Populated endpoint ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92239340-c0df-4316-9d55-1b079bb7d1ce", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"", Pod:"coredns-5dd5756b68-twgps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6689c856e21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.425 [INFO][3972] k8s.go 387: Calico CNI using IPs: [192.168.32.132/32] ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.425 [INFO][3972] dataplane_linux.go 68: Setting the host side veth name to cali6689c856e21 ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.446 [INFO][3972] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" 
WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.446 [INFO][3972] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92239340-c0df-4316-9d55-1b079bb7d1ce", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5", Pod:"coredns-5dd5756b68-twgps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6689c856e21", MAC:"2e:c1:b4:30:44:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:22.470723 containerd[1325]: 2024-06-25 16:28:22.465 [INFO][3972] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5" Namespace="kube-system" Pod="coredns-5dd5756b68-twgps" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:28:22.491000 audit[4038]: NETFILTER_CFG table=filter:106 family=2 entries=44 op=nft_register_chain pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:22.491000 audit[4038]: SYSCALL arch=c000003e syscall=46 success=yes exit=22260 a0=3 a1=7ffd0955ee80 a2=0 a3=7ffd0955ee6c items=0 ppid=3548 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:22.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:22.495742 containerd[1325]: time="2024-06-25T16:28:22.494935987Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:22.495742 containerd[1325]: time="2024-06-25T16:28:22.495027956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:22.495742 containerd[1325]: time="2024-06-25T16:28:22.495046873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:22.495742 containerd[1325]: time="2024-06-25T16:28:22.495059498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:22.543546 systemd[1]: run-containerd-runc-k8s.io-c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c-runc.1EYQLs.mount: Deactivated successfully. Jun 25 16:28:22.545947 containerd[1325]: time="2024-06-25T16:28:22.545496137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:22.546135 containerd[1325]: time="2024-06-25T16:28:22.546108729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:22.546254 containerd[1325]: time="2024-06-25T16:28:22.546213723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:22.546375 containerd[1325]: time="2024-06-25T16:28:22.546350770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:22.569533 containerd[1325]: time="2024-06-25T16:28:22.569484706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtzcb,Uid:6bca2029-8200-41b2-a394-21e05e7bb1ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c\"" Jun 25 16:28:22.608000 audit[3900]: USER_ACCT pid=3900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:22.610068 sshd[3900]: Accepted publickey for core from 172.24.4.1 port 48732 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:22.609000 audit[3900]: CRED_ACQ pid=3900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:22.609000 audit[3900]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd94cb24f0 a2=3 a3=7fc20d28a480 items=0 ppid=1 pid=3900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:22.609000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:22.615567 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:22.624898 systemd-logind[1302]: New session 9 of user core. Jun 25 16:28:22.630568 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:28:22.642000 audit[3900]: USER_START pid=3900 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:22.643000 audit[4107]: CRED_ACQ pid=4107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:22.660861 containerd[1325]: time="2024-06-25T16:28:22.660812604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-twgps,Uid:92239340-c0df-4316-9d55-1b079bb7d1ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5\"" Jun 25 16:28:22.664217 containerd[1325]: time="2024-06-25T16:28:22.663440003Z" level=info msg="CreateContainer within sandbox \"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:28:22.693250 containerd[1325]: time="2024-06-25T16:28:22.693196979Z" level=info msg="CreateContainer within sandbox \"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e0a0071debf1a93185f9eaf4cee9028f43c12fd0035f50e3ce912021f42b59c\"" Jun 25 16:28:22.695551 containerd[1325]: time="2024-06-25T16:28:22.695000756Z" level=info msg="StartContainer for \"0e0a0071debf1a93185f9eaf4cee9028f43c12fd0035f50e3ce912021f42b59c\"" Jun 25 16:28:22.768796 containerd[1325]: time="2024-06-25T16:28:22.768670937Z" level=info msg="StartContainer for \"0e0a0071debf1a93185f9eaf4cee9028f43c12fd0035f50e3ce912021f42b59c\" returns successfully" Jun 25 16:28:23.275850 kubelet[2373]: I0625 16:28:23.275800 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-twgps" podStartSLOduration=60.275759225 podCreationTimestamp="2024-06-25 16:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:23.233871447 +0000 UTC m=+71.776905592" watchObservedRunningTime="2024-06-25 16:28:23.275759225 +0000 UTC m=+71.818793330" Jun 25 16:28:23.459000 audit[4169]: NETFILTER_CFG table=filter:107 family=2 entries=8 op=nft_register_rule pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:23.459000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffefa79a850 a2=0 a3=7ffefa79a83c items=0 ppid=2548 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:23.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:23.463000 audit[4169]: NETFILTER_CFG table=nat:108 family=2 entries=44 op=nft_register_rule pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:23.463000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffefa79a850 a2=0 a3=7ffefa79a83c items=0 ppid=2548 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:23.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:23.476000 audit[4171]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:23.476000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd9c1b63f0 a2=0 a3=7ffd9c1b63dc items=0 ppid=2548 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:23.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:23.544000 audit[4171]: NETFILTER_CFG table=nat:110 family=2 entries=56 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:23.544000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd9c1b63f0 a2=0 a3=7ffd9c1b63dc items=0 ppid=2548 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:23.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:23.688309 sshd[3900]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:23.688000 audit[3900]: USER_END pid=3900 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:23.688000 audit[3900]: CRED_DISP pid=3900 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:23.691624 systemd[1]: sshd@8-172.24.4.182:22-172.24.4.1:48732.service: Deactivated successfully. Jun 25 16:28:23.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.182:22-172.24.4.1:48732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:23.693046 systemd-logind[1302]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:28:23.693085 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:28:23.694678 systemd-logind[1302]: Removed session 9. 
Jun 25 16:28:24.043859 systemd-networkd[1092]: cali6689c856e21: Gained IPv6LL Jun 25 16:28:24.236113 systemd-networkd[1092]: calie47b690226b: Gained IPv6LL Jun 25 16:28:24.356431 containerd[1325]: time="2024-06-25T16:28:24.356284528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:24.358184 containerd[1325]: time="2024-06-25T16:28:24.358125222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:28:24.359752 containerd[1325]: time="2024-06-25T16:28:24.359710279Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:24.395475 containerd[1325]: time="2024-06-25T16:28:24.395421700Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:24.397858 containerd[1325]: time="2024-06-25T16:28:24.397811339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:24.398856 containerd[1325]: time="2024-06-25T16:28:24.398799027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.758576102s" Jun 25 16:28:24.398920 containerd[1325]: time="2024-06-25T16:28:24.398858372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:28:24.400672 containerd[1325]: time="2024-06-25T16:28:24.400647686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:28:24.417614 containerd[1325]: time="2024-06-25T16:28:24.417572614Z" level=info msg="CreateContainer within sandbox \"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:28:24.438349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706707099.mount: Deactivated successfully. 
Jun 25 16:28:24.450568 containerd[1325]: time="2024-06-25T16:28:24.450508918Z" level=info msg="CreateContainer within sandbox \"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa\"" Jun 25 16:28:24.451275 containerd[1325]: time="2024-06-25T16:28:24.451255848Z" level=info msg="StartContainer for \"c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa\"" Jun 25 16:28:24.945801 containerd[1325]: time="2024-06-25T16:28:24.945684458Z" level=info msg="StartContainer for \"c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa\" returns successfully" Jun 25 16:28:25.339668 kubelet[2373]: I0625 16:28:25.339532 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c86b8c874-zq2rx" podStartSLOduration=51.559253435 podCreationTimestamp="2024-06-25 16:27:30 +0000 UTC" firstStartedPulling="2024-06-25 16:28:20.619080018 +0000 UTC m=+69.162114123" lastFinishedPulling="2024-06-25 16:28:24.399319368 +0000 UTC m=+72.942353473" observedRunningTime="2024-06-25 16:28:25.251539093 +0000 UTC m=+73.794573208" watchObservedRunningTime="2024-06-25 16:28:25.339492785 +0000 UTC m=+73.882526890" Jun 25 16:28:26.983363 containerd[1325]: time="2024-06-25T16:28:26.983312507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:26.986721 containerd[1325]: time="2024-06-25T16:28:26.986673853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:28:26.992811 containerd[1325]: time="2024-06-25T16:28:26.992786101Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:26.994640 containerd[1325]: time="2024-06-25T16:28:26.994595027Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:26.996332 containerd[1325]: time="2024-06-25T16:28:26.996294782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:26.997194 containerd[1325]: time="2024-06-25T16:28:26.997165019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.596409894s" Jun 25 16:28:26.997285 containerd[1325]: time="2024-06-25T16:28:26.997266366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:28:27.000152 containerd[1325]: time="2024-06-25T16:28:27.000122352Z" level=info msg="CreateContainer within sandbox \"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:28:27.026180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710777846.mount: Deactivated 
successfully. Jun 25 16:28:27.031900 containerd[1325]: time="2024-06-25T16:28:27.031846573Z" level=info msg="CreateContainer within sandbox \"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"903e5eb792c5852f90903bcc85440122e7bb21d41578fbfd12f32430d43f686f\"" Jun 25 16:28:27.033112 containerd[1325]: time="2024-06-25T16:28:27.032850548Z" level=info msg="StartContainer for \"903e5eb792c5852f90903bcc85440122e7bb21d41578fbfd12f32430d43f686f\"" Jun 25 16:28:27.121374 containerd[1325]: time="2024-06-25T16:28:27.121325756Z" level=info msg="StartContainer for \"903e5eb792c5852f90903bcc85440122e7bb21d41578fbfd12f32430d43f686f\" returns successfully" Jun 25 16:28:27.125650 containerd[1325]: time="2024-06-25T16:28:27.125232814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:28:28.024370 systemd[1]: run-containerd-runc-k8s.io-903e5eb792c5852f90903bcc85440122e7bb21d41578fbfd12f32430d43f686f-runc.IpEtli.mount: Deactivated successfully. Jun 25 16:28:28.705066 systemd[1]: Started sshd@9-172.24.4.182:22-172.24.4.1:42844.service - OpenSSH per-connection server daemon (172.24.4.1:42844). Jun 25 16:28:28.713613 kernel: kauditd_printk_skb: 94 callbacks suppressed Jun 25 16:28:28.713724 kernel: audit: type=1130 audit(1719332908.704:332): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.182:22-172.24.4.1:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.182:22-172.24.4.1:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:29.944000 audit[4283]: USER_ACCT pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:29.951395 kernel: audit: type=1101 audit(1719332909.944:333): pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:29.951811 sshd[4283]: Accepted publickey for core from 172.24.4.1 port 42844 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:29.972302 kernel: audit: type=1103 audit(1719332909.952:334): pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:29.972418 kernel: audit: type=1006 audit(1719332909.952:335): pid=4283 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:28:29.972471 kernel: audit: type=1300 audit(1719332909.952:335): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1ec8b530 a2=3 a3=7fe5ee439480 items=0 ppid=1 pid=4283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:29.972530 kernel: audit: type=1327 audit(1719332909.952:335): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:29.952000 audit[4283]: CRED_ACQ pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:29.952000 audit[4283]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1ec8b530 a2=3 a3=7fe5ee439480 items=0 ppid=1 pid=4283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:29.952000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:29.973028 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:29.983749 systemd-logind[1302]: New session 10 of user core. Jun 25 16:28:29.989476 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:28:30.005769 kernel: audit: type=1105 audit(1719332909.999:336): pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:29.999000 audit[4283]: USER_START pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:30.010032 kernel: audit: type=1103 audit(1719332910.006:337): pid=4286 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:30.006000 audit[4286]: CRED_ACQ pid=4286 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:30.664788 containerd[1325]: time="2024-06-25T16:28:30.664735761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:30.667812 containerd[1325]: time="2024-06-25T16:28:30.667771258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:28:30.670739 containerd[1325]: time="2024-06-25T16:28:30.670687134Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:30.674196 containerd[1325]: time="2024-06-25T16:28:30.674113808Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:30.679562 containerd[1325]: time="2024-06-25T16:28:30.679526792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:30.680805 containerd[1325]: time="2024-06-25T16:28:30.680774813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 3.554947298s" Jun 25 16:28:30.680924 containerd[1325]: time="2024-06-25T16:28:30.680901840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:28:30.689091 containerd[1325]: time="2024-06-25T16:28:30.687861613Z" level=info msg="CreateContainer within sandbox \"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:28:30.726954 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2225970675.mount: Deactivated successfully. Jun 25 16:28:30.729267 containerd[1325]: time="2024-06-25T16:28:30.729224097Z" level=info msg="CreateContainer within sandbox \"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b75ae51eb7bbbd22ce9c0f83597d5451c7f0bc7f1c3682df0152db01e0f4e9b6\"" Jun 25 16:28:30.734659 containerd[1325]: time="2024-06-25T16:28:30.731417927Z" level=info msg="StartContainer for \"b75ae51eb7bbbd22ce9c0f83597d5451c7f0bc7f1c3682df0152db01e0f4e9b6\"" Jun 25 16:28:30.909878 containerd[1325]: time="2024-06-25T16:28:30.909803888Z" level=info msg="StartContainer for \"b75ae51eb7bbbd22ce9c0f83597d5451c7f0bc7f1c3682df0152db01e0f4e9b6\" returns successfully" Jun 25 16:28:31.109934 kubelet[2373]: I0625 16:28:31.109800 2373 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:28:31.112953 kubelet[2373]: I0625 16:28:31.112914 2373 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:28:31.233573 sshd[4283]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:31.236000 audit[4283]: USER_END pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:31.236000 audit[4283]: CRED_DISP pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:31.243787 kernel: audit: type=1106 audit(1719332911.236:338): pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:31.243874 kernel: audit: type=1104 audit(1719332911.236:339): pid=4283 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:31.244491 systemd[1]: Started sshd@10-172.24.4.182:22-172.24.4.1:42852.service - OpenSSH per-connection server daemon (172.24.4.1:42852). Jun 25 16:28:31.245202 systemd[1]: sshd@9-172.24.4.182:22-172.24.4.1:42844.service: Deactivated successfully. Jun 25 16:28:31.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.182:22-172.24.4.1:42852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:31.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.182:22-172.24.4.1:42844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:31.249380 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 25 16:28:31.249412 systemd-logind[1302]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:28:31.255884 systemd-logind[1302]: Removed session 10. Jun 25 16:28:31.276828 kubelet[2373]: I0625 16:28:31.270490 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xtzcb" podStartSLOduration=53.158402747 podCreationTimestamp="2024-06-25 16:27:30 +0000 UTC" firstStartedPulling="2024-06-25 16:28:22.571180564 +0000 UTC m=+71.114214669" lastFinishedPulling="2024-06-25 16:28:30.683213227 +0000 UTC m=+79.226247382" observedRunningTime="2024-06-25 16:28:31.269858746 +0000 UTC m=+79.812892861" watchObservedRunningTime="2024-06-25 16:28:31.27043546 +0000 UTC m=+79.813469565" Jun 25 16:28:31.708843 systemd[1]: run-containerd-runc-k8s.io-b75ae51eb7bbbd22ce9c0f83597d5451c7f0bc7f1c3682df0152db01e0f4e9b6-runc.coghDK.mount: Deactivated successfully. Jun 25 16:28:32.493000 audit[4346]: USER_ACCT pid=4346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:32.494000 audit[4346]: CRED_ACQ pid=4346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:32.495000 audit[4346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2e813bf0 a2=3 a3=7f9955906480 items=0 ppid=1 pid=4346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:32.495000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:32.496390 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:32.500951 sshd[4346]: Accepted publickey for core from 172.24.4.1 port 42852 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:32.514373 systemd-logind[1302]: New session 11 of user core. Jun 25 16:28:32.515314 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:28:32.531000 audit[4346]: USER_START pid=4346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:32.533000 audit[4353]: CRED_ACQ pid=4353 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:33.675930 sshd[4346]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:33.678000 audit[4346]: USER_END pid=4346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:33.678000 audit[4346]: CRED_DISP pid=4346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:33.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.182:22-172.24.4.1:42866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:33.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.182:22-172.24.4.1:42852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:33.686757 systemd[1]: Started sshd@11-172.24.4.182:22-172.24.4.1:42866.service - OpenSSH per-connection server daemon (172.24.4.1:42866). Jun 25 16:28:33.687888 systemd[1]: sshd@10-172.24.4.182:22-172.24.4.1:42852.service: Deactivated successfully. Jun 25 16:28:33.692930 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:28:33.693786 systemd-logind[1302]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:28:33.695865 systemd-logind[1302]: Removed session 11. 
Jun 25 16:28:35.005190 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:28:35.005357 kernel: audit: type=1101 audit(1719332915.001:351): pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.001000 audit[4359]: USER_ACCT pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.005468 sshd[4359]: Accepted publickey for core from 172.24.4.1 port 42866 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:35.006000 audit[4359]: CRED_ACQ pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.009592 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:35.014034 kernel: audit: type=1103 audit(1719332915.006:352): pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.008000 audit[4359]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbe02f020 a2=3 a3=7f5d90a9a480 items=0 ppid=1 pid=4359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.017895 kernel: audit: type=1006 audit(1719332915.008:353): pid=4359 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:28:35.018059 kernel: audit: type=1300 audit(1719332915.008:353): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbe02f020 a2=3 a3=7f5d90a9a480 items=0 ppid=1 pid=4359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:35.026005 kernel: audit: type=1327 audit(1719332915.008:353): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:35.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:35.054162 systemd-logind[1302]: New session 12 of user core. Jun 25 16:28:35.068339 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 16:28:35.073000 audit[4359]: USER_START pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.080024 kernel: audit: type=1105 audit(1719332915.073:354): pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.080000 audit[4364]: CRED_ACQ pid=4364 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:35.088163 kernel: audit: type=1103 audit(1719332915.080:355): pid=4364 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:36.056972 sshd[4359]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:36.058000 audit[4359]: USER_END pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:36.064152 kernel: audit: type=1106 audit(1719332916.058:356): pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:36.058000 audit[4359]: CRED_DISP pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:36.065853 systemd-logind[1302]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:28:36.067910 systemd[1]: sshd@11-172.24.4.182:22-172.24.4.1:42866.service: Deactivated successfully. Jun 25 16:28:36.069741 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:28:36.074120 kernel: audit: type=1104 audit(1719332916.058:357): pid=4359 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:36.073417 systemd-logind[1302]: Removed session 12. Jun 25 16:28:36.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.182:22-172.24.4.1:42866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:36.084082 kernel: audit: type=1131 audit(1719332916.063:358): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.182:22-172.24.4.1:42866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:41.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.182:22-172.24.4.1:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:41.070804 systemd[1]: Started sshd@12-172.24.4.182:22-172.24.4.1:33088.service - OpenSSH per-connection server daemon (172.24.4.1:33088). Jun 25 16:28:41.084111 kernel: audit: type=1130 audit(1719332921.070:359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.182:22-172.24.4.1:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:41.354366 systemd[1]: run-containerd-runc-k8s.io-c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa-runc.mIQAFl.mount: Deactivated successfully. Jun 25 16:28:41.536250 systemd[1]: run-containerd-runc-k8s.io-a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58-runc.quusaP.mount: Deactivated successfully. Jun 25 16:28:42.437000 audit[4383]: USER_ACCT pid=4383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.438953 sshd[4383]: Accepted publickey for core from 172.24.4.1 port 33088 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:42.442327 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:42.448054 kernel: audit: type=1101 audit(1719332922.437:360): pid=4383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.440000 audit[4383]: CRED_ACQ pid=4383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.462095 kernel: audit: type=1103 audit(1719332922.440:361): pid=4383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.462353 kernel: audit: type=1006 audit(1719332922.440:362): pid=4383 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:28:42.440000 audit[4383]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe595239f0 a2=3 a3=7ffa63038480 items=0 ppid=1 pid=4383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.486375 kernel: audit: type=1300 audit(1719332922.440:362): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe595239f0 a2=3 a3=7ffa63038480 items=0 ppid=1 pid=4383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.487199 kernel: audit: type=1327 audit(1719332922.440:362): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:42.440000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:42.499638 systemd-logind[1302]: New session 13 of user core. Jun 25 16:28:42.503583 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:28:42.528000 audit[4383]: USER_START pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.540726 kernel: audit: type=1105 audit(1719332922.528:363): pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.546032 kernel: audit: type=1103 audit(1719332922.539:364): pid=4433 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:42.539000 audit[4433]: CRED_ACQ pid=4433 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:43.271379 sshd[4383]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:43.272000 audit[4383]: USER_END pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:43.276809 systemd[1]: sshd@12-172.24.4.182:22-172.24.4.1:33088.service: Deactivated successfully. Jun 25 16:28:43.278332 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:28:43.284192 systemd-logind[1302]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:28:43.292533 kernel: audit: type=1106 audit(1719332923.272:365): pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:43.292649 kernel: audit: type=1104 audit(1719332923.272:366): pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:43.272000 audit[4383]: CRED_DISP pid=4383 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:43.293665 systemd-logind[1302]: Removed session 13. Jun 25 16:28:43.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.182:22-172.24.4.1:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:48.285779 systemd[1]: Started sshd@13-172.24.4.182:22-172.24.4.1:59108.service - OpenSSH per-connection server daemon (172.24.4.1:59108). Jun 25 16:28:48.288182 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:48.288290 kernel: audit: type=1130 audit(1719332928.285:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.182:22-172.24.4.1:59108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:48.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.182:22-172.24.4.1:59108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:49.738000 audit[4444]: USER_ACCT pid=4444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.741783 sshd[4444]: Accepted publickey for core from 172.24.4.1 port 59108 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:49.744271 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:49.754148 kernel: audit: type=1101 audit(1719332929.738:369): pid=4444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.754291 kernel: audit: type=1103 audit(1719332929.738:370): pid=4444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.738000 audit[4444]: CRED_ACQ pid=4444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.758472 systemd-logind[1302]: New session 14 of user core. Jun 25 16:28:49.764603 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:28:49.767123 kernel: audit: type=1006 audit(1719332929.743:371): pid=4444 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:28:49.743000 audit[4444]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb540a420 a2=3 a3=7fc31e586480 items=0 ppid=1 pid=4444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:49.782050 kernel: audit: type=1300 audit(1719332929.743:371): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb540a420 a2=3 a3=7fc31e586480 items=0 ppid=1 pid=4444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:49.743000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:49.789038 kernel: audit: type=1327 audit(1719332929.743:371): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:49.786000 audit[4444]: USER_START pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.804333 kernel: audit: type=1105 audit(1719332929.786:372): pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.802000 audit[4447]: CRED_ACQ pid=4447 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:49.813235 kernel: audit: type=1103 audit(1719332929.802:373): pid=4447 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:50.741199 sshd[4444]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:50.741000 audit[4444]: USER_END pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:50.748191 systemd-logind[1302]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:28:50.749775 systemd[1]: sshd@13-172.24.4.182:22-172.24.4.1:59108.service: Deactivated successfully. Jun 25 16:28:50.750615 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:28:50.752299 systemd-logind[1302]: Removed session 14. 
Jun 25 16:28:50.753034 kernel: audit: type=1106 audit(1719332930.741:374): pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:50.741000 audit[4444]: CRED_DISP pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:50.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.182:22-172.24.4.1:59108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:50.763049 kernel: audit: type=1104 audit(1719332930.741:375): pid=4444 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:53.163965 systemd[1]: run-containerd-runc-k8s.io-c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa-runc.qkMYDo.mount: Deactivated successfully. Jun 25 16:28:55.755722 systemd[1]: Started sshd@14-172.24.4.182:22-172.24.4.1:53816.service - OpenSSH per-connection server daemon (172.24.4.1:53816). Jun 25 16:28:55.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.182:22-172.24.4.1:53816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:55.768703 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:55.768866 kernel: audit: type=1130 audit(1719332935.756:377): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.182:22-172.24.4.1:53816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:57.110000 audit[4483]: USER_ACCT pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.118368 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:57.121352 kernel: audit: type=1101 audit(1719332937.110:378): pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.121428 sshd[4483]: Accepted publickey for core from 172.24.4.1 port 53816 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:57.115000 audit[4483]: CRED_ACQ pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.133866 kernel: audit: type=1103 audit(1719332937.115:379): pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.134040 kernel: audit: type=1006 audit(1719332937.115:380): pid=4483 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:28:57.135123 systemd-logind[1302]: New session 15 of user core. Jun 25 16:28:57.162743 kernel: audit: type=1300 audit(1719332937.115:380): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc72e43940 a2=3 a3=7f66aade3480 items=0 ppid=1 pid=4483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:57.162888 kernel: audit: type=1327 audit(1719332937.115:380): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:57.115000 audit[4483]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc72e43940 a2=3 a3=7f66aade3480 items=0 ppid=1 pid=4483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:57.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:57.160423 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:28:57.182000 audit[4483]: USER_START pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.194598 kernel: audit: type=1105 audit(1719332937.182:381): pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.185000 audit[4486]: CRED_ACQ pid=4486 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:57.201022 kernel: audit: type=1103 audit(1719332937.185:382): pid=4486 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:58.032096 sshd[4483]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:58.052000 audit[4483]: USER_END pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:58.059271 systemd[1]: Started sshd@15-172.24.4.182:22-172.24.4.1:53824.service - OpenSSH per-connection server daemon (172.24.4.1:53824). Jun 25 16:28:58.064903 kernel: audit: type=1106 audit(1719332938.052:383): pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:58.052000 audit[4483]: CRED_DISP pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:58.068095 systemd[1]: sshd@14-172.24.4.182:22-172.24.4.1:53816.service: Deactivated successfully. Jun 25 16:28:58.069892 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:28:58.074064 systemd-logind[1302]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:28:58.076050 kernel: audit: type=1104 audit(1719332938.052:384): pid=4483 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:58.076611 systemd-logind[1302]: Removed session 15. Jun 25 16:28:58.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.182:22-172.24.4.1:53824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:58.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.182:22-172.24.4.1:53816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:59.365000 audit[4494]: USER_ACCT pid=4494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:59.366760 sshd[4494]: Accepted publickey for core from 172.24.4.1 port 53824 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:28:59.367000 audit[4494]: CRED_ACQ pid=4494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:59.368000 audit[4494]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb8891430 a2=3 a3=7f630325d480 items=0 ppid=1 pid=4494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:59.368000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:59.370502 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:59.382647 systemd-logind[1302]: New session 16 of user core. Jun 25 16:28:59.387624 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:28:59.400000 audit[4494]: USER_START pid=4494 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:28:59.403000 audit[4499]: CRED_ACQ pid=4499 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:00.551292 sshd[4494]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:00.554000 audit[4494]: USER_END pid=4494 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:00.554000 audit[4494]: CRED_DISP pid=4494 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:00.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.182:22-172.24.4.1:53832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:00.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.182:22-172.24.4.1:53824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:00.560652 systemd[1]: Started sshd@16-172.24.4.182:22-172.24.4.1:53832.service - OpenSSH per-connection server daemon (172.24.4.1:53832). Jun 25 16:29:00.561754 systemd[1]: sshd@15-172.24.4.182:22-172.24.4.1:53824.service: Deactivated successfully. Jun 25 16:29:00.567073 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:29:00.569563 systemd-logind[1302]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:29:00.575820 systemd-logind[1302]: Removed session 16. Jun 25 16:29:02.008000 audit[4505]: USER_ACCT pid=4505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.011078 sshd[4505]: Accepted publickey for core from 172.24.4.1 port 53832 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:02.012165 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:29:02.012259 kernel: audit: type=1101 audit(1719332942.008:396): pid=4505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.012859 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:02.010000 audit[4505]: CRED_ACQ pid=4505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.023267 kernel: audit: type=1103 audit(1719332942.010:397): pid=4505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.024435 kernel: audit: type=1006 audit(1719332942.011:398): pid=4505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:29:02.011000 audit[4505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcf457880 a2=3 a3=7f0b54354480 items=0 ppid=1 pid=4505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:02.036248 kernel: audit: type=1300 audit(1719332942.011:398): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcf457880 a2=3 a3=7f0b54354480 items=0 ppid=1 pid=4505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:02.036378 kernel: audit: type=1327 audit(1719332942.011:398): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:02.011000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:02.035627 systemd-logind[1302]: New session 17 of user core. Jun 25 16:29:02.037443 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:29:02.053000 audit[4505]: USER_START pid=4505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.060535 kernel: audit: type=1105 audit(1719332942.053:399): pid=4505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.057000 audit[4516]: CRED_ACQ pid=4516 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:02.066031 kernel: audit: type=1103 audit(1719332942.057:400): pid=4516 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:04.716000 audit[4533]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:04.722024 kernel: audit: type=1325 audit(1719332944.716:401): table=filter:111 family=2 entries=20 op=nft_register_rule pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:04.716000 audit[4533]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffa7a78220 a2=0 a3=7fffa7a7820c items=0 ppid=2548 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.729065 kernel: audit: type=1300 audit(1719332944.716:401): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffa7a78220 a2=0 a3=7fffa7a7820c items=0 ppid=2548 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:04.734011 kernel: audit: type=1327 audit(1719332944.716:401): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:04.729000 audit[4533]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:04.729000 audit[4533]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffa7a78220 a2=0 a3=0 items=0 ppid=2548 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:04.750000 audit[4535]: NETFILTER_CFG table=filter:113 family=2 entries=32 op=nft_register_rule pid=4535 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:04.750000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff95e012c0 a2=0 a3=7fff95e012ac items=0 ppid=2548 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:04.752000 audit[4535]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:04.752000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff95e012c0 a2=0 a3=0 items=0 ppid=2548 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:04.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:04.864368 sshd[4505]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:04.865000 audit[4505]: USER_END pid=4505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:04.865000 audit[4505]: CRED_DISP pid=4505 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:04.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.182:22-172.24.4.1:56934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:04.875952 systemd[1]: Started sshd@17-172.24.4.182:22-172.24.4.1:56934.service - OpenSSH per-connection server daemon (172.24.4.1:56934). Jun 25 16:29:04.877249 systemd[1]: sshd@16-172.24.4.182:22-172.24.4.1:53832.service: Deactivated successfully. Jun 25 16:29:04.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.182:22-172.24.4.1:53832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:04.878761 systemd-logind[1302]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:29:04.879832 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:29:04.885553 systemd-logind[1302]: Removed session 17. 
Jun 25 16:29:06.054000 audit[4536]: USER_ACCT pid=4536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:06.056204 sshd[4536]: Accepted publickey for core from 172.24.4.1 port 56934 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:06.058000 audit[4536]: CRED_ACQ pid=4536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:06.058000 audit[4536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff00e97930 a2=3 a3=7fa83cb23480 items=0 ppid=1 pid=4536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:06.058000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:06.059611 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:06.066787 systemd-logind[1302]: New session 18 of user core. Jun 25 16:29:06.076433 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:29:06.082000 audit[4536]: USER_START pid=4536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:06.084000 audit[4541]: CRED_ACQ pid=4541 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:08.497000 audit[4536]: USER_END pid=4536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:08.498112 sshd[4536]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:08.522318 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:29:08.522362 kernel: audit: type=1106 audit(1719332948.497:414): pid=4536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:08.522388 kernel: audit: type=1104 audit(1719332948.498:415): pid=4536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:08.498000 audit[4536]: CRED_DISP pid=4536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:08.522935 systemd[1]: Started sshd@18-172.24.4.182:22-172.24.4.1:56942.service - OpenSSH per-connection server daemon 
(172.24.4.1:56942). Jun 25 16:29:08.524306 systemd[1]: sshd@17-172.24.4.182:22-172.24.4.1:56934.service: Deactivated successfully. Jun 25 16:29:08.530788 kernel: audit: type=1130 audit(1719332948.522:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.182:22-172.24.4.1:56942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:08.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.182:22-172.24.4.1:56942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:08.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.182:22-172.24.4.1:56934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:08.531884 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:29:08.535073 kernel: audit: type=1131 audit(1719332948.524:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.182:22-172.24.4.1:56934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:08.535446 systemd-logind[1302]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:29:08.538475 systemd-logind[1302]: Removed session 18. Jun 25 16:29:09.704000 audit[4547]: USER_ACCT pid=4547 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.711758 sshd[4547]: Accepted publickey for core from 172.24.4.1 port 56942 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:09.712853 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:09.717074 kernel: audit: type=1101 audit(1719332949.704:418): pid=4547 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.706000 audit[4547]: CRED_ACQ pid=4547 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.732072 kernel: audit: type=1103 audit(1719332949.706:419): pid=4547 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.745070 kernel: audit: type=1006 audit(1719332949.709:420): pid=4547 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:29:09.709000 audit[4547]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4bc16e50 a2=3 a3=7f7f2165f480 items=0 ppid=1 pid=4547 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.762407 kernel: audit: type=1300 audit(1719332949.709:420): 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4bc16e50 a2=3 a3=7f7f2165f480 items=0 ppid=1 pid=4547 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:09.762599 kernel: audit: type=1327 audit(1719332949.709:420): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:09.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:09.777160 systemd-logind[1302]: New session 19 of user core. Jun 25 16:29:09.782324 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:29:09.792000 audit[4547]: USER_START pid=4547 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.803024 kernel: audit: type=1105 audit(1719332949.792:421): pid=4547 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:09.793000 audit[4552]: CRED_ACQ pid=4552 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:10.620201 sshd[4547]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:10.621000 audit[4547]: USER_END pid=4547 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:10.622000 audit[4547]: CRED_DISP pid=4547 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:10.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.182:22-172.24.4.1:56942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:10.626706 systemd[1]: sshd@18-172.24.4.182:22-172.24.4.1:56942.service: Deactivated successfully. Jun 25 16:29:10.630942 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:29:10.631542 systemd-logind[1302]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:29:10.634438 systemd-logind[1302]: Removed session 19. Jun 25 16:29:11.540807 systemd[1]: run-containerd-runc-k8s.io-a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58-runc.2roVKc.mount: Deactivated successfully. Jun 25 16:29:11.681929 containerd[1325]: time="2024-06-25T16:29:11.681159566Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.879 [WARNING][4596] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bca2029-8200-41b2-a394-21e05e7bb1ca", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c", Pod:"csi-node-driver-xtzcb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie47b690226b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.882 [INFO][4596] k8s.go 608: Cleaning up netns ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.882 [INFO][4596] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" iface="eth0" netns="" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.882 [INFO][4596] k8s.go 615: Releasing IP address(es) ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.882 [INFO][4596] utils.go 188: Calico CNI releasing IP address ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.921 [INFO][4604] ipam_plugin.go 411: Releasing address using handleID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.922 [INFO][4604] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.922 [INFO][4604] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.929 [WARNING][4604] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.929 [INFO][4604] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.930 [INFO][4604] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:11.935252 containerd[1325]: 2024-06-25 16:29:11.933 [INFO][4596] k8s.go 621: Teardown processing complete. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:11.936879 containerd[1325]: time="2024-06-25T16:29:11.935324604Z" level=info msg="TearDown network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" successfully" Jun 25 16:29:11.936879 containerd[1325]: time="2024-06-25T16:29:11.935368868Z" level=info msg="StopPodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" returns successfully" Jun 25 16:29:11.943837 containerd[1325]: time="2024-06-25T16:29:11.943789489Z" level=info msg="RemovePodSandbox for \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:29:11.963265 containerd[1325]: time="2024-06-25T16:29:11.949141888Z" level=info msg="Forcibly stopping sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\"" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.012 [WARNING][4624] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bca2029-8200-41b2-a394-21e05e7bb1ca", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"c4e70bdbab762a3f2977842473762d7a250395dfaf16c5b7f7401cdcac1c978c", Pod:"csi-node-driver-xtzcb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie47b690226b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.012 [INFO][4624] k8s.go 608: Cleaning up netns ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.012 [INFO][4624] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" iface="eth0" netns="" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.012 [INFO][4624] k8s.go 615: Releasing IP address(es) ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.013 [INFO][4624] utils.go 188: Calico CNI releasing IP address ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.040 [INFO][4630] ipam_plugin.go 411: Releasing address using handleID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.040 [INFO][4630] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.040 [INFO][4630] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.047 [WARNING][4630] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.047 [INFO][4630] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" HandleID="k8s-pod-network.a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-csi--node--driver--xtzcb-eth0" Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.049 [INFO][4630] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.052289 containerd[1325]: 2024-06-25 16:29:12.050 [INFO][4624] k8s.go 621: Teardown processing complete. ContainerID="a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375" Jun 25 16:29:12.052804 containerd[1325]: time="2024-06-25T16:29:12.052329916Z" level=info msg="TearDown network for sandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" successfully" Jun 25 16:29:12.080405 containerd[1325]: time="2024-06-25T16:29:12.080234049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:12.086313 containerd[1325]: time="2024-06-25T16:29:12.086261712Z" level=info msg="RemovePodSandbox \"a75cef97e65e6551cb4a26e2fc6048e406db86e989073956bb3d181ae68f3375\" returns successfully" Jun 25 16:29:12.087118 containerd[1325]: time="2024-06-25T16:29:12.087069088Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.146 [WARNING][4648] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92239340-c0df-4316-9d55-1b079bb7d1ce", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5", Pod:"coredns-5dd5756b68-twgps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6689c856e21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.146 [INFO][4648] k8s.go 608: Cleaning up netns ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.146 [INFO][4648] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" iface="eth0" netns="" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.146 [INFO][4648] k8s.go 615: Releasing IP address(es) ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.146 [INFO][4648] utils.go 188: Calico CNI releasing IP address ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.170 [INFO][4655] ipam_plugin.go 411: Releasing address using handleID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.171 [INFO][4655] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.171 [INFO][4655] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.178 [WARNING][4655] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.178 [INFO][4655] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.179 [INFO][4655] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.182826 containerd[1325]: 2024-06-25 16:29:12.181 [INFO][4648] k8s.go 621: Teardown processing complete. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.183367 containerd[1325]: time="2024-06-25T16:29:12.182885216Z" level=info msg="TearDown network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" successfully" Jun 25 16:29:12.183367 containerd[1325]: time="2024-06-25T16:29:12.182935231Z" level=info msg="StopPodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" returns successfully" Jun 25 16:29:12.183615 containerd[1325]: time="2024-06-25T16:29:12.183574328Z" level=info msg="RemovePodSandbox for \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:29:12.183662 containerd[1325]: time="2024-06-25T16:29:12.183618111Z" level=info msg="Forcibly stopping sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\"" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.231 [WARNING][4673] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92239340-c0df-4316-9d55-1b079bb7d1ce", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"f17700e01ddcc2742ce3196d35d736503e8965f2c569fd5f0b5aafcd1f8f66c5", Pod:"coredns-5dd5756b68-twgps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6689c856e21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.233 [INFO][4673] k8s.go 608: Cleaning up netns ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.233 [INFO][4673] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" iface="eth0" netns="" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.233 [INFO][4673] k8s.go 615: Releasing IP address(es) ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.233 [INFO][4673] utils.go 188: Calico CNI releasing IP address ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.257 [INFO][4679] ipam_plugin.go 411: Releasing address using handleID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.258 [INFO][4679] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.258 [INFO][4679] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.266 [WARNING][4679] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.266 [INFO][4679] ipam_plugin.go 439: Releasing address using workloadID ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" HandleID="k8s-pod-network.55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--twgps-eth0" Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.269 [INFO][4679] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.276320 containerd[1325]: 2024-06-25 16:29:12.270 [INFO][4673] k8s.go 621: Teardown processing complete. ContainerID="55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3" Jun 25 16:29:12.278907 containerd[1325]: time="2024-06-25T16:29:12.278053080Z" level=info msg="TearDown network for sandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" successfully" Jun 25 16:29:12.289952 containerd[1325]: time="2024-06-25T16:29:12.289895150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:12.290212 containerd[1325]: time="2024-06-25T16:29:12.290187898Z" level=info msg="RemovePodSandbox \"55209a57b2c32c5f8eef7e89e142fb9f801c8573bd52773d678808ec1b5dbdd3\" returns successfully" Jun 25 16:29:12.290903 containerd[1325]: time="2024-06-25T16:29:12.290849306Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.338 [WARNING][4698] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aeaccc06-becd-4fb5-9a55-596dc08726e6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32", Pod:"coredns-5dd5756b68-fd8lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54a7e45f219", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.339 [INFO][4698] k8s.go 608: Cleaning up netns ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.341 [INFO][4698] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" iface="eth0" netns="" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.341 [INFO][4698] k8s.go 615: Releasing IP address(es) ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.341 [INFO][4698] utils.go 188: Calico CNI releasing IP address ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.362 [INFO][4704] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.362 [INFO][4704] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.362 [INFO][4704] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.374 [WARNING][4704] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.374 [INFO][4704] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.390 [INFO][4704] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.396887 containerd[1325]: 2024-06-25 16:29:12.395 [INFO][4698] k8s.go 621: Teardown processing complete. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.399216 containerd[1325]: time="2024-06-25T16:29:12.397244852Z" level=info msg="TearDown network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" successfully" Jun 25 16:29:12.399216 containerd[1325]: time="2024-06-25T16:29:12.397279768Z" level=info msg="StopPodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" returns successfully" Jun 25 16:29:12.399216 containerd[1325]: time="2024-06-25T16:29:12.397716479Z" level=info msg="RemovePodSandbox for \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:29:12.399216 containerd[1325]: time="2024-06-25T16:29:12.397747118Z" level=info msg="Forcibly stopping sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\"" Jun 25 16:29:12.411000 audit[4728]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=4728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:12.411000 audit[4728]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdb2b0a210 a2=0 a3=7ffdb2b0a1fc items=0 ppid=2548 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:12.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:12.414000 audit[4728]: NETFILTER_CFG table=nat:116 family=2 entries=104 op=nft_register_chain pid=4728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:12.414000 audit[4728]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdb2b0a210 a2=0 a3=7ffdb2b0a1fc items=0 ppid=2548 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:12.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.464 [WARNING][4723] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aeaccc06-becd-4fb5-9a55-596dc08726e6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"087c34f58b193f44bbee2d10128a4f1e48fdf2efd0250ee8bd50591a57457a32", Pod:"coredns-5dd5756b68-fd8lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54a7e45f219", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.465 [INFO][4723] k8s.go 608: Cleaning up netns ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.465 [INFO][4723] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" iface="eth0" netns="" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.465 [INFO][4723] k8s.go 615: Releasing IP address(es) ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.465 [INFO][4723] utils.go 188: Calico CNI releasing IP address ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.496 [INFO][4731] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.496 [INFO][4731] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.496 [INFO][4731] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.504 [WARNING][4731] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.504 [INFO][4731] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" HandleID="k8s-pod-network.5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-coredns--5dd5756b68--fd8lc-eth0" Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.507 [INFO][4731] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.510896 containerd[1325]: 2024-06-25 16:29:12.509 [INFO][4723] k8s.go 621: Teardown processing complete. ContainerID="5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142" Jun 25 16:29:12.512430 containerd[1325]: time="2024-06-25T16:29:12.510933057Z" level=info msg="TearDown network for sandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" successfully" Jun 25 16:29:12.515159 containerd[1325]: time="2024-06-25T16:29:12.515104437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:12.515252 containerd[1325]: time="2024-06-25T16:29:12.515222501Z" level=info msg="RemovePodSandbox \"5d4776a5aa6936ab9348361d906b21b4034eee3cc914897170533467221c5142\" returns successfully" Jun 25 16:29:12.515823 containerd[1325]: time="2024-06-25T16:29:12.515775504Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.558 [WARNING][4749] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0", GenerateName:"calico-kube-controllers-6c86b8c874-", Namespace:"calico-system", SelfLink:"", UID:"4c83fa87-9e29-4b71-bd23-94783a019eb8", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c86b8c874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f", Pod:"calico-kube-controllers-6c86b8c874-zq2rx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62adc4ef477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.559 [INFO][4749] k8s.go 608: Cleaning up netns ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.559 [INFO][4749] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" iface="eth0" netns="" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.559 [INFO][4749] k8s.go 615: Releasing IP address(es) ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.559 [INFO][4749] utils.go 188: Calico CNI releasing IP address ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.584 [INFO][4755] ipam_plugin.go 411: Releasing address using handleID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.584 [INFO][4755] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.585 [INFO][4755] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.592 [WARNING][4755] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.592 [INFO][4755] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.594 [INFO][4755] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.597538 containerd[1325]: 2024-06-25 16:29:12.595 [INFO][4749] k8s.go 621: Teardown processing complete. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.598636 containerd[1325]: time="2024-06-25T16:29:12.598173271Z" level=info msg="TearDown network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" successfully" Jun 25 16:29:12.598636 containerd[1325]: time="2024-06-25T16:29:12.598205873Z" level=info msg="StopPodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" returns successfully" Jun 25 16:29:12.600059 containerd[1325]: time="2024-06-25T16:29:12.600021198Z" level=info msg="RemovePodSandbox for \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:29:12.600125 containerd[1325]: time="2024-06-25T16:29:12.600066174Z" level=info msg="Forcibly stopping sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\"" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.641 [WARNING][4773] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0", GenerateName:"calico-kube-controllers-6c86b8c874-", Namespace:"calico-system", SelfLink:"", UID:"4c83fa87-9e29-4b71-bd23-94783a019eb8", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c86b8c874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"0967d134a3e7bfa8233a4e3f46151c6488c3b53e9ebcb166f517771ab9d9f13f", Pod:"calico-kube-controllers-6c86b8c874-zq2rx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62adc4ef477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.641 [INFO][4773] k8s.go 608: Cleaning up netns ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.641 [INFO][4773] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" iface="eth0" netns="" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.641 [INFO][4773] k8s.go 615: Releasing IP address(es) ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.641 [INFO][4773] utils.go 188: Calico CNI releasing IP address ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.672 [INFO][4779] ipam_plugin.go 411: Releasing address using handleID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.672 [INFO][4779] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.672 [INFO][4779] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.681 [WARNING][4779] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.682 [INFO][4779] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" HandleID="k8s-pod-network.5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--kube--controllers--6c86b8c874--zq2rx-eth0" Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.685 [INFO][4779] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:12.689221 containerd[1325]: 2024-06-25 16:29:12.686 [INFO][4773] k8s.go 621: Teardown processing complete. ContainerID="5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1" Jun 25 16:29:12.690695 containerd[1325]: time="2024-06-25T16:29:12.689248295Z" level=info msg="TearDown network for sandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" successfully" Jun 25 16:29:12.695710 containerd[1325]: time="2024-06-25T16:29:12.695624471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:12.696031 containerd[1325]: time="2024-06-25T16:29:12.695721346Z" level=info msg="RemovePodSandbox \"5db4b8174d1bbb501e988d47f9aa6009e56e36480ea0179c2c0dbff563d3a7c1\" returns successfully" Jun 25 16:29:14.861000 audit[4791]: NETFILTER_CFG table=filter:117 family=2 entries=9 op=nft_register_rule pid=4791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.865781 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 16:29:14.865872 kernel: audit: type=1325 audit(1719332954.861:428): table=filter:117 family=2 entries=9 op=nft_register_rule pid=4791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.861000 audit[4791]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffff2ab1230 a2=0 a3=7ffff2ab121c items=0 ppid=2548 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.879065 kernel: audit: type=1300 audit(1719332954.861:428): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffff2ab1230 a2=0 a3=7ffff2ab121c items=0 ppid=2548 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.893876 kubelet[2373]: I0625 16:29:14.893830 2373 topology_manager.go:215] "Topology Admit Handler" podUID="7b2a5547-49f7-4e16-845e-d54a2853e58e" podNamespace="calico-apiserver" podName="calico-apiserver-5844695b98-tjmjb" Jun 25 16:29:14.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.920034 kernel: audit: type=1327 audit(1719332954.861:428): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 
25 16:29:14.925000 audit[4791]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.931052 kernel: audit: type=1325 audit(1719332954.925:429): table=nat:118 family=2 entries=44 op=nft_register_rule pid=4791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.925000 audit[4791]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffff2ab1230 a2=0 a3=7ffff2ab121c items=0 ppid=2548 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.946098 kernel: audit: type=1300 audit(1719332954.925:429): arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffff2ab1230 a2=0 a3=7ffff2ab121c items=0 ppid=2548 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.954077 kernel: audit: type=1327 audit(1719332954.925:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.970000 audit[4793]: NETFILTER_CFG table=filter:119 family=2 entries=10 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.970000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedbb6cf80 a2=0 a3=7ffedbb6cf6c items=0 ppid=2548 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.977631 kernel: audit: type=1325 audit(1719332954.970:430): table=filter:119 family=2 entries=10 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.977714 kernel: audit: type=1300 audit(1719332954.970:430): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedbb6cf80 a2=0 a3=7ffedbb6cf6c items=0 ppid=2548 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.988026 kernel: audit: type=1327 audit(1719332954.970:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.973000 audit[4793]: NETFILTER_CFG table=nat:120 family=2 entries=44 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.993039 kernel: audit: type=1325 audit(1719332954.973:431): table=nat:120 family=2 entries=44 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.973000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffedbb6cf80 a2=0 a3=7ffedbb6cf6c items=0 ppid=2548 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:15.020847 kubelet[2373]: I0625 16:29:15.020788 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b2a5547-49f7-4e16-845e-d54a2853e58e-calico-apiserver-certs\") pod \"calico-apiserver-5844695b98-tjmjb\" (UID: \"7b2a5547-49f7-4e16-845e-d54a2853e58e\") " pod="calico-apiserver/calico-apiserver-5844695b98-tjmjb" Jun 25 16:29:15.021176 kubelet[2373]: I0625 16:29:15.021162 2373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz4r6\" (UniqueName: \"kubernetes.io/projected/7b2a5547-49f7-4e16-845e-d54a2853e58e-kube-api-access-nz4r6\") pod \"calico-apiserver-5844695b98-tjmjb\" (UID: \"7b2a5547-49f7-4e16-845e-d54a2853e58e\") " pod="calico-apiserver/calico-apiserver-5844695b98-tjmjb" Jun 25 16:29:15.122677 kubelet[2373]: E0625 16:29:15.122643 2373 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:29:15.153470 kubelet[2373]: E0625 16:29:15.153401 2373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2a5547-49f7-4e16-845e-d54a2853e58e-calico-apiserver-certs podName:7b2a5547-49f7-4e16-845e-d54a2853e58e nodeName:}" failed. No retries permitted until 2024-06-25 16:29:15.622941619 +0000 UTC m=+124.165975734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7b2a5547-49f7-4e16-845e-d54a2853e58e-calico-apiserver-certs") pod "calico-apiserver-5844695b98-tjmjb" (UID: "7b2a5547-49f7-4e16-845e-d54a2853e58e") : secret "calico-apiserver-certs" not found Jun 25 16:29:15.633781 systemd[1]: Started sshd@19-172.24.4.182:22-172.24.4.1:60284.service - OpenSSH per-connection server daemon (172.24.4.1:60284). Jun 25 16:29:15.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.182:22-172.24.4.1:60284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:15.847301 containerd[1325]: time="2024-06-25T16:29:15.847166652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844695b98-tjmjb,Uid:7b2a5547-49f7-4e16-845e-d54a2853e58e,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:29:16.078900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:16.079314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0f29ca257e4: link becomes ready Jun 25 16:29:16.076807 systemd-networkd[1092]: cali0f29ca257e4: Link UP Jun 25 16:29:16.078235 systemd-networkd[1092]: cali0f29ca257e4: Gained carrier Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:15.965 [INFO][4798] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0 calico-apiserver-5844695b98- calico-apiserver 7b2a5547-49f7-4e16-845e-d54a2853e58e 1166 0 2024-06-25 16:29:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5844695b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815-2-4-3-54e11b9a94.novalocal calico-apiserver-5844695b98-tjmjb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f29ca257e4 [] []}} ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:15.966 [INFO][4798] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.000 [INFO][4810] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" HandleID="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.016 [INFO][4810] ipam_plugin.go 264: Auto assigning IP ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" HandleID="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815-2-4-3-54e11b9a94.novalocal", "pod":"calico-apiserver-5844695b98-tjmjb", "timestamp":"2024-06-25 16:29:16.000763944 +0000 UTC"}, Hostname:"ci-3815-2-4-3-54e11b9a94.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.017 [INFO][4810] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.017 [INFO][4810] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.017 [INFO][4810] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-3-54e11b9a94.novalocal' Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.020 [INFO][4810] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.025 [INFO][4810] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.035 [INFO][4810] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.048 [INFO][4810] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.051 [INFO][4810] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.051 [INFO][4810] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.052 [INFO][4810] ipam.go 1685: Creating new handle: k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6 Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.056 [INFO][4810] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.065 [INFO][4810] ipam.go 1216: Successfully claimed IPs: [192.168.32.133/26] block=192.168.32.128/26 handle="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.065 [INFO][4810] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.133/26] handle="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" host="ci-3815-2-4-3-54e11b9a94.novalocal" Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.066 [INFO][4810] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:16.106553 containerd[1325]: 2024-06-25 16:29:16.066 [INFO][4810] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.133/26] IPv6=[] ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" HandleID="k8s-pod-network.ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Workload="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.069 [INFO][4798] k8s.go 386: Populated endpoint ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0", GenerateName:"calico-apiserver-5844695b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2a5547-49f7-4e16-845e-d54a2853e58e", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 29, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844695b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"", Pod:"calico-apiserver-5844695b98-tjmjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f29ca257e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.069 [INFO][4798] k8s.go 387: Calico CNI using IPs: [192.168.32.133/32] ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.069 [INFO][4798] dataplane_linux.go 68: Setting the host side veth name to cali0f29ca257e4 ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.080 [INFO][4798] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.081 
[INFO][4798] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0", GenerateName:"calico-apiserver-5844695b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2a5547-49f7-4e16-845e-d54a2853e58e", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 29, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844695b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-3-54e11b9a94.novalocal", ContainerID:"ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6", Pod:"calico-apiserver-5844695b98-tjmjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f29ca257e4", MAC:"ba:42:e6:ae:31:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:16.107397 containerd[1325]: 2024-06-25 16:29:16.104 [INFO][4798] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6" Namespace="calico-apiserver" Pod="calico-apiserver-5844695b98-tjmjb" WorkloadEndpoint="ci--3815--2--4--3--54e11b9a94.novalocal-k8s-calico--apiserver--5844695b98--tjmjb-eth0" Jun 25 16:29:16.142000 audit[4831]: NETFILTER_CFG table=filter:121 family=2 entries=51 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:16.142000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=26260 a0=3 a1=7ffdc867fda0 a2=0 a3=7ffdc867fd8c items=0 ppid=3548 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.142000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:16.180465 containerd[1325]: time="2024-06-25T16:29:16.180377436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:16.180645 containerd[1325]: time="2024-06-25T16:29:16.180453391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:16.180645 containerd[1325]: time="2024-06-25T16:29:16.180477837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:16.180645 containerd[1325]: time="2024-06-25T16:29:16.180496112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:16.212699 systemd[1]: run-containerd-runc-k8s.io-ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6-runc.Lbr87Y.mount: Deactivated successfully. Jun 25 16:29:16.276618 containerd[1325]: time="2024-06-25T16:29:16.276577333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844695b98-tjmjb,Uid:7b2a5547-49f7-4e16-845e-d54a2853e58e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6\"" Jun 25 16:29:16.280074 containerd[1325]: time="2024-06-25T16:29:16.278673800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:29:16.801000 audit[4795]: USER_ACCT pid=4795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:16.803000 audit[4795]: CRED_ACQ pid=4795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:16.804316 sshd[4795]: Accepted publickey for core from 172.24.4.1 port 60284 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:16.803000 audit[4795]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6aaa5aa0 a2=3 a3=7fed4946c480 items=0 ppid=1 pid=4795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.803000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:16.805710 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:16.817067 systemd-logind[1302]: New session 20 of user core. Jun 25 16:29:16.820379 systemd[1]: Started session-20.scope - Session 20 of User core. 
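The MountVolume.SetUp failure logged at 16:29:15 occurred because the calico-apiserver-certs secret did not yet exist in the calico-apiserver namespace; kubelet scheduled a retry 500 ms later, and the RunPodSandbox call at 16:29:15.847 shows the pod evidently proceeded once the secret became mountable. A minimal client-go sketch for checking whether that secret is present (the kubeconfig path below is an assumption, not taken from this log):

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; on a real node point this at an admin kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and secret name taken from the kubelet error above.
	_, err = cs.CoreV1().Secrets("calico-apiserver").Get(context.TODO(), "calico-apiserver-certs", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("secret calico-apiserver-certs not found; kubelet will keep retrying the mount")
	case err != nil:
		panic(err)
	default:
		fmt.Println("secret exists; the volume mount should succeed on the next retry")
	}
}
```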
Jun 25 16:29:16.830000 audit[4795]: USER_START pid=4795 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:16.833000 audit[4876]: CRED_ACQ pid=4876 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:17.576431 sshd[4795]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:17.576000 audit[4795]: USER_END pid=4795 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:17.577000 audit[4795]: CRED_DISP pid=4795 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:17.579815 systemd[1]: sshd@19-172.24.4.182:22-172.24.4.1:60284.service: Deactivated successfully. Jun 25 16:29:17.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.182:22-172.24.4.1:60284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:17.581110 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:29:17.581459 systemd-logind[1302]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:29:17.583150 systemd-logind[1302]: Removed session 20. 
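The Calico IPAM trace above shows the plugin confirming this node's affinity for block 192.168.32.128/26 and then claiming 192.168.32.133 from it; a /26 block covers the 64 addresses 192.168.32.128 through 192.168.32.191. A small sketch reproducing the containment check on the values from the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the ipam.go log lines above.
	block := netip.MustParsePrefix("192.168.32.128/26")
	assigned := netip.MustParseAddr("192.168.32.133")

	// A /26 holds 2^(32-26) = 64 addresses, here .128 through .191.
	fmt.Println("block contains assigned address:", block.Contains(assigned)) // true
}
```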
Jun 25 16:29:17.867576 systemd-networkd[1092]: cali0f29ca257e4: Gained IPv6LL Jun 25 16:29:20.653724 containerd[1325]: time="2024-06-25T16:29:20.653671005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:20.655668 containerd[1325]: time="2024-06-25T16:29:20.655604621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:29:20.663441 containerd[1325]: time="2024-06-25T16:29:20.663407053Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:20.664400 containerd[1325]: time="2024-06-25T16:29:20.664341189Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:20.665766 containerd[1325]: time="2024-06-25T16:29:20.665741111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:20.666737 containerd[1325]: time="2024-06-25T16:29:20.666646642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.387921395s" Jun 25 16:29:20.666816 containerd[1325]: time="2024-06-25T16:29:20.666737936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:29:20.670471 containerd[1325]: time="2024-06-25T16:29:20.670441907Z" level=info msg="CreateContainer within sandbox \"ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:29:20.702916 containerd[1325]: time="2024-06-25T16:29:20.702837142Z" level=info msg="CreateContainer within sandbox \"ebf2b86ede2ad77ae2648a0ab5255d82c1c7ff8ecd64960a67d5531af92f92d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d0a970678290b11b87bd311c9194c357589f2a79fca53556a0423eee359217f\"" Jun 25 16:29:20.704031 containerd[1325]: time="2024-06-25T16:29:20.703949427Z" level=info msg="StartContainer for \"6d0a970678290b11b87bd311c9194c357589f2a79fca53556a0423eee359217f\"" Jun 25 16:29:21.276884 containerd[1325]: time="2024-06-25T16:29:21.276794015Z" level=info msg="StartContainer for \"6d0a970678290b11b87bd311c9194c357589f2a79fca53556a0423eee359217f\" returns successfully" Jun 25 16:29:21.488074 kernel: kauditd_printk_skb: 16 callbacks suppressed Jun 25 16:29:21.488195 kernel: audit: type=1325 audit(1719332961.483:442): table=filter:122 family=2 entries=10 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.483000 audit[4933]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.483000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffee8ee0550 a2=0 a3=7ffee8ee053c items=0 
ppid=2548 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.499063 kernel: audit: type=1300 audit(1719332961.483:442): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffee8ee0550 a2=0 a3=7ffee8ee053c items=0 ppid=2548 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.499000 audit[4933]: NETFILTER_CFG table=nat:123 family=2 entries=44 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.505422 kernel: audit: type=1327 audit(1719332961.483:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.505473 kernel: audit: type=1325 audit(1719332961.499:443): table=nat:123 family=2 entries=44 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.499000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffee8ee0550 a2=0 a3=7ffee8ee053c items=0 ppid=2548 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.508172 kernel: audit: type=1300 audit(1719332961.499:443): arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffee8ee0550 a2=0 a3=7ffee8ee053c items=0 ppid=2548 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.509668 kernel: audit: type=1327 audit(1719332961.499:443): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.531681 kubelet[2373]: I0625 16:29:21.531567 2373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5844695b98-tjmjb" podStartSLOduration=3.142874593 podCreationTimestamp="2024-06-25 16:29:14 +0000 UTC" firstStartedPulling="2024-06-25 16:29:16.278293177 +0000 UTC m=+124.821327282" lastFinishedPulling="2024-06-25 16:29:20.666946042 +0000 UTC m=+129.209980147" observedRunningTime="2024-06-25 16:29:21.529353355 +0000 UTC m=+130.072387460" watchObservedRunningTime="2024-06-25 16:29:21.531527458 +0000 UTC m=+130.074561563" Jun 25 16:29:21.550000 audit[4935]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.550000 audit[4935]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd9bbb6690 a2=0 a3=7ffd9bbb667c items=0 ppid=2548 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:29:21.555884 kernel: audit: type=1325 audit(1719332961.550:444): table=filter:124 family=2 entries=10 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.555948 kernel: audit: type=1300 audit(1719332961.550:444): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd9bbb6690 a2=0 a3=7ffd9bbb667c items=0 ppid=2548 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.558017 kernel: audit: type=1327 audit(1719332961.550:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.558000 audit[4935]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.558000 audit[4935]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffd9bbb6690 a2=0 a3=7ffd9bbb667c items=0 ppid=2548 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.562023 kernel: audit: type=1325 audit(1719332961.558:445): table=nat:125 family=2 entries=44 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:22.508000 audit[4939]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=4939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:22.508000 audit[4939]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff3eba7140 a2=0 a3=7fff3eba712c items=0 ppid=2548 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:22.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:22.512000 audit[4939]: NETFILTER_CFG table=nat:127 family=2 entries=51 op=nft_register_chain pid=4939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:22.512000 audit[4939]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7fff3eba7140 a2=0 a3=7fff3eba712c items=0 ppid=2548 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:22.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:22.585351 systemd[1]: Started sshd@20-172.24.4.182:22-172.24.4.1:60294.service - OpenSSH per-connection server daemon (172.24.4.1:60294). 
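The cali0f29ca257e4 interface came up at 16:29:16 and gained an IPv6 link-local address ("Gained IPv6LL") about 1.8 seconds later, though the log does not print the address itself. As a hedged illustration only, the sketch below derives a link-local address with the kernel's default EUI-64 scheme, using the workload-endpoint MAC ba:42:e6:ae:31:4e recorded above as sample input; the host-side cali veth typically carries a different, fixed MAC, so the actual address on the interface may differ.

```go
package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives an IPv6 link-local address from a 48-bit MAC using the
// standard EUI-64 scheme: flip the universal/local bit and insert ff:fe in the middle.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// Sample MAC taken from the Calico WorkloadEndpoint above.
	mac, err := net.ParseMAC("ba:42:e6:ae:31:4e")
	if err != nil {
		panic(err)
	}
	fmt.Println(eui64LinkLocal(mac)) // fe80::b842:e6ff:feae:314e
}
```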
Jun 25 16:29:22.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.182:22-172.24.4.1:60294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:23.169147 systemd[1]: run-containerd-runc-k8s.io-c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa-runc.QnyrjW.mount: Deactivated successfully. Jun 25 16:29:23.537000 audit[4961]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:23.537000 audit[4961]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd87a44ed0 a2=0 a3=7ffd87a44ebc items=0 ppid=2548 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:23.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:23.542000 audit[4961]: NETFILTER_CFG table=nat:129 family=2 entries=58 op=nft_register_chain pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:23.542000 audit[4961]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffd87a44ed0 a2=0 a3=7ffd87a44ebc items=0 ppid=2548 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:23.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:24.254000 audit[4940]: USER_ACCT pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:24.255394 sshd[4940]: Accepted publickey for core from 172.24.4.1 port 60294 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:24.255000 audit[4940]: CRED_ACQ pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:24.255000 audit[4940]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdda805580 a2=3 a3=7fb63a0e7480 items=0 ppid=1 pid=4940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:24.255000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:24.257118 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:24.263114 systemd-logind[1302]: New session 21 of user core. Jun 25 16:29:24.265270 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 16:29:24.270000 audit[4940]: USER_START pid=4940 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:24.271000 audit[4968]: CRED_ACQ pid=4968 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:25.538521 sshd[4940]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:25.539000 audit[4940]: USER_END pid=4940 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:25.539000 audit[4940]: CRED_DISP pid=4940 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:25.544238 systemd[1]: sshd@20-172.24.4.182:22-172.24.4.1:60294.service: Deactivated successfully. Jun 25 16:29:25.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.182:22-172.24.4.1:60294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:25.547368 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:29:25.547943 systemd-logind[1302]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:29:25.552725 systemd-logind[1302]: Removed session 21. Jun 25 16:29:30.556774 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:29:30.557170 kernel: audit: type=1130 audit(1719332970.550:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.182:22-172.24.4.1:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:30.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.182:22-172.24.4.1:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:30.550953 systemd[1]: Started sshd@21-172.24.4.182:22-172.24.4.1:44536.service - OpenSSH per-connection server daemon (172.24.4.1:44536). 
Jun 25 16:29:32.015000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.018228 sshd[4983]: Accepted publickey for core from 172.24.4.1 port 44536 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:32.018881 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:32.016000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.020453 kernel: audit: type=1101 audit(1719332972.015:460): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.020517 kernel: audit: type=1103 audit(1719332972.016:461): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.025018 kernel: audit: type=1006 audit(1719332972.016:462): pid=4983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:29:32.025244 kernel: audit: type=1300 audit(1719332972.016:462): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0f5088a0 a2=3 a3=7f4638db4480 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.016000 audit[4983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0f5088a0 a2=3 a3=7f4638db4480 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.016000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:32.031023 kernel: audit: type=1327 audit(1719332972.016:462): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:32.034472 systemd-logind[1302]: New session 22 of user core. Jun 25 16:29:32.042442 systemd[1]: Started session-22.scope - Session 22 of User core. 
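The audit PROCTITLE records throughout this section store the process command line as hex-encoded bytes with NUL separators between arguments: 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the longer iptables value decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A small decoder sketch over those two values from the log:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

func main() {
	// PROCTITLE values copied from the audit records in this log.
	for _, proctitle := range []string{
		"737368643A20636F7265205B707269765D",
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
	} {
		raw, err := hex.DecodeString(proctitle)
		if err != nil {
			panic(err)
		}
		// auditd joins argv entries with NUL bytes; replace them with spaces for display.
		fmt.Println(string(bytes.ReplaceAll(raw, []byte{0}, []byte(" "))))
	}
}
```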
Jun 25 16:29:32.051000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.057019 kernel: audit: type=1105 audit(1719332972.051:463): pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.056000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.063018 kernel: audit: type=1103 audit(1719332972.056:464): pid=4986 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.924343 sshd[4983]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:32.925000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.934033 kernel: audit: type=1106 audit(1719332972.925:465): pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.934152 kernel: audit: type=1104 audit(1719332972.925:466): pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.925000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:32.938232 systemd[1]: sshd@21-172.24.4.182:22-172.24.4.1:44536.service: Deactivated successfully. Jun 25 16:29:32.939364 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:29:32.940276 systemd-logind[1302]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:29:32.943952 systemd-logind[1302]: Removed session 22. Jun 25 16:29:32.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.182:22-172.24.4.1:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:37.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.182:22-172.24.4.1:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:37.946738 systemd[1]: Started sshd@22-172.24.4.182:22-172.24.4.1:50214.service - OpenSSH per-connection server daemon (172.24.4.1:50214). Jun 25 16:29:37.951870 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:37.957276 kernel: audit: type=1130 audit(1719332977.946:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.182:22-172.24.4.1:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:39.115000 audit[5003]: USER_ACCT pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.116513 sshd[5003]: Accepted publickey for core from 172.24.4.1 port 50214 ssh2: RSA SHA256:28OIdiFmM2tDKGFH/eV86Nr5Hdswek2nBOxwiGuzcsE Jun 25 16:29:39.122050 sshd[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:39.126171 kernel: audit: type=1101 audit(1719332979.115:469): pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.126313 kernel: audit: type=1103 audit(1719332979.119:470): pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.119000 audit[5003]: CRED_ACQ pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.139715 kernel: audit: type=1006 audit(1719332979.120:471): pid=5003 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:29:39.143661 kernel: audit: type=1300 audit(1719332979.120:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1621afe0 a2=3 a3=7f1629c7d480 items=0 ppid=1 pid=5003 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:39.120000 audit[5003]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1621afe0 a2=3 a3=7f1629c7d480 items=0 ppid=1 pid=5003 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:39.157592 kernel: audit: type=1327 audit(1719332979.120:471): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:39.120000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:39.159038 systemd-logind[1302]: New session 23 of user core. Jun 25 16:29:39.162627 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 16:29:39.180000 audit[5003]: USER_START pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.192054 kernel: audit: type=1105 audit(1719332979.180:472): pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.192000 audit[5006]: CRED_ACQ pid=5006 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.203169 kernel: audit: type=1103 audit(1719332979.192:473): pid=5006 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.842293 sshd[5003]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:39.844000 audit[5003]: USER_END pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.862655 kernel: audit: type=1106 audit(1719332979.844:474): pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.862822 kernel: audit: type=1104 audit(1719332979.845:475): pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.845000 audit[5003]: CRED_DISP pid=5003 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Jun 25 16:29:39.875894 systemd-logind[1302]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:29:39.878634 systemd[1]: sshd@22-172.24.4.182:22-172.24.4.1:50214.service: Deactivated successfully. Jun 25 16:29:39.882606 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:29:39.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.182:22-172.24.4.1:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:39.886468 systemd-logind[1302]: Removed session 23. Jun 25 16:29:41.276802 systemd[1]: run-containerd-runc-k8s.io-c5858b3cf2376661e32d8be133805efd05e2b738ff9df74470a4b252de6ac6fa-runc.2C4TKr.mount: Deactivated successfully. 
Jun 25 16:29:41.542665 systemd[1]: run-containerd-runc-k8s.io-a99be7bb007731ab8e889dc0471684535fa29ace979dacc8415a6a4c0cd60a58-runc.9cRNDn.mount: Deactivated successfully.